gao_GAO-18-358
Background

TRICARE Regional Structure and Contracts

Under TRICARE, beneficiaries may obtain health care through DOD's system of military hospitals and clinics, referred to as military treatment facilities (MTF), or from civilian providers. DHA uses managed care support contractors to develop networks of civilian providers, referred to as network providers, to serve all TRICARE beneficiaries in geographic areas called Prime Service Areas. The contractors also perform other customer service functions, such as processing claims and assisting beneficiaries with finding providers. Each TRICARE region within the United States has a managed care support contractor. In July 2016, DOD awarded its fourth generation of TRICARE managed care support contracts. The new contracts reduced the number of TRICARE regions from three (North, South, and West) to two (East and West). On January 1, 2018, the TRICARE program began health care delivery under these contracts.

TRICARE's Health Plan Options

Prior to January 1, 2018, TRICARE's non-Medicare-eligible beneficiary population could obtain care through three basic health plan options—TRICARE Prime (managed care), TRICARE Standard (fee-for-service), and TRICARE Extra (preferred provider organization)—that varied by enrollment requirements, choices in civilian providers, and whether there were established access standards. Beginning January 1, 2018, the TRICARE Standard and Extra options were terminated and TRICARE Select, a self-managed, preferred provider option, was established. (See table 1.) Beneficiaries using the TRICARE Standard and Extra options as of December 31, 2017, were automatically enrolled in TRICARE Select on January 1, 2018. Beneficiaries are allowed to change their plan at any time prior to January 1, 2019, after which they will only be able to change plans during an annual open enrollment season or within a certain time period following a qualifying life event. In August 2017, DOD estimated that over 2 million beneficiaries would be enrolled in TRICARE Select, which is approximately the same number of beneficiaries who used the TRICARE Standard option. According to DOD, approximately 66 percent of these beneficiaries resided in a Prime Service Area—where networks of civilian providers have been established.

TRICARE Select Implementation Plan

In addition to establishing the TRICARE Select option and making other TRICARE program changes, the NDAA 2017 required DOD to develop an implementation plan for TRICARE Select that includes seven specific elements.
These elements are, in part, intended to ensure beneficiaries' access to care under the TRICARE Select option, and they require DOD to

- ensure that at least 85 percent of the beneficiary population under TRICARE Select is covered by the network by January 1, 2018 (Element A);
- ensure access standards for appointments for health care that meet or exceed those of high-performing health care systems in the United States, as determined by the Secretary (Element B);
- establish mechanisms for monitoring compliance with access standards (Element C);
- establish health care provider-to-beneficiary ratios (Element D);
- monitor on a monthly basis complaints by beneficiaries with respect to network adequacy and the availability of health care providers (Element E);
- establish requirements for mechanisms to monitor the responses to complaints by beneficiaries (Element F); and
- establish mechanisms to evaluate the quality metrics of the network providers established under section 728 (of the NDAA 2017) (Element G).

DOD's TRICARE Select Implementation Plan Included the Mandated Elements and Addressed Most Leading Planning Practices, but Does Not Reflect Current Approach for Access Standards

DOD's Implementation Plan Included the Mandated Elements and Addressed Most Leading Practices for Strategic Management Planning

The TRICARE Select implementation plan DOD submitted to Congress included the seven specific elements mandated by the NDAA 2017. Specifically, the implementation plan described the upcoming changes to the TRICARE benefit and included individual sections outlining DOD's approach for implementing each of the required elements. For example, for element A—ensure that at least 85 percent of the beneficiary population under TRICARE Select is covered by the network by January 1, 2018—DOD described, among other things, how the regional contractors will identify geographic areas with concentrations of TRICARE Select beneficiaries and how they will establish a sufficient provider network to serve that population. We also found that the implementation plan reflected most of the leading practices for sound strategic management planning as identified by our prior work. (See table 2.) These leading practices suggest that strategic planning documents include the following: (1) a mission statement, (2) goals, (3) strategies to achieve goals, (4) plans to assess progress, and (5) identification of challenges and risks. For example, DOD's implementation plan clearly articulated a mission statement, which is "to ensure beneficiaries receive the right level of care, at the right time, delivered by the right provider." Additionally, for six of the mandated elements, DOD's implementation plan outlined the goal, strategies to achieve the goal, and how DOD will assess progress. (See elements A, B, C, D, E, and F in table 2.) This information is supplemented by contract documents that require specific plans and data reports from the managed care support contractors. For example, for element A—ensure that at least 85 percent of the beneficiary population is covered by the network—each managed care support contractor is required to submit monthly performance reports that show that a sufficient number of providers in primary and specialty care are available to meet access requirements. While DOD's implementation plan addressed many of our leading practices, there were instances where some of the leading practices were only partially addressed or not addressed at all.
For example, none of the mandated elements incorporated the leading practice related to identifying the challenges and risks that could affect the success of the element.

Element G, Evaluation of Quality Metrics, Remains under Development

We also found that the implementation plan partially addressed or did not address the leading practices related to strategies or plans to assess progress for element G—establish mechanisms to evaluate the quality metrics of the network providers. The plan stated that DOD is reviewing the required set of core quality performance metrics and will implement a subset of these performance measures that can be used in future contracts. However, the plan did not include several strategic details such as (1) the process that DOD will use to determine the metrics, (2) the criteria and resources that are needed to select the subset of these performance measures, and (3) how DOD will assess progress and evaluate future metrics. DOD officials told us that a workgroup of departmental officials—including those from DHA and the TRICARE Regional Offices and representatives of the military service branches—with expertise in health care quality is

- evaluating the metrics for inclusion in the subset of measures based on criteria such as availability of data, the size of the population affected, and resources needed;
- developing a work plan and time frames to analyze the metrics that are (1) already being reported, (2) not being reported but data are available, and (3) not being reported and require data solutions in order to track information; and
- making preliminary recommendations on which measures to adopt and which to consider for future adoption.

DOD officials explained that some of the details of their approach to the mandated elements had not been finalized when they were completing the implementation plan, including some of the details for element G, which continues to be a work in progress. They added that they were under tight time constraints with competing priorities. They explained that they had to plan for the implementation of TRICARE Select while concurrently transitioning to new managed care support contracts, which had to be modified to incorporate this new health plan option. Therefore, while DOD officials were developing the TRICARE Select implementation plan, they had to determine the specific program requirements for this option and modify the contracts to account for these changes.

Leading Practice on Challenges and Risks May Be Captured through Contract Oversight Mechanisms

We also found that DOD's implementation plan did not address the leading practice related to recognizing the challenges or risks to success for any of the seven elements. This practice ensures that an organization considers any external factors that could significantly affect the achievement of its goal. For example, for element A—ensure that at least 85 percent of the beneficiary population is covered by the network—the implementation plan did not address what challenges and risks the contractors might experience in establishing this network. For example, one of the two managed care support contractors stated that it did not have data on the beneficiaries who had sought coverage under TRICARE Standard and Extra as these beneficiaries did not have to enroll in these health plan options. Thus, the contractor explained that it was difficult to establish a baseline for calculating the 85 percent network coverage required for TRICARE Select.
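The arithmetic behind that baseline problem is simple to sketch. The following minimal Python example, with hypothetical counts, shows the basic Element A check; the denominator is exactly the population baseline the contractor said it could not establish.

```python
# Illustrative sketch of the Element A coverage check (not DOD's actual method).
# The 85 percent figure comes from the NDAA 2017 mandate; the counts below are
# hypothetical. The denominator is the baseline that is hard to establish when
# beneficiaries never had to enroll in the prior Standard and Extra options.

REQUIRED_COVERAGE = 0.85  # NDAA 2017 Element A threshold

def network_coverage(covered: int, eligible: int) -> float:
    """Share of the TRICARE Select population covered by the provider network."""
    if eligible <= 0:
        raise ValueError("cannot compute coverage without a population baseline")
    return covered / eligible

# Example: 1.7 million of an estimated 2.0 million Select beneficiaries
# covered by the network yields exactly the 85 percent minimum.
share = network_coverage(1_700_000, 2_000_000)
print(f"coverage = {share:.0%}; meets mandate: {share >= REQUIRED_COVERAGE}")
```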
The other managed care support contractor told us that specific challenges included negotiating provider discounts in certain areas, identifying which providers participated in the past, and balancing the composition of the network between primary and specialty care. However, DOD officials told us that they considered and planned for the challenges and risks associated with certain elements—including establishing a monitoring and remediation process to help ensure contractors meet the 85 percent network coverage requirement—even though this was not described in the plan. DOD officials explained that their approach to the implementation plan was to create a strategic overview rather than a detailed work plan. These officials also told us that details and time frames related to the mandated elements are captured in contract documents, such as those that establish the managed care support contractors' reporting and planning requirements. Although these contract documents do not specifically address challenges and risks for each element, officials stated that they have oversight mechanisms in place that allow them to address any challenges faced by these contractors, thereby mitigating any potential risks. For example, DHA officials told us that the managed care support contractors provide status updates on their network expansion progress at weekly transition meetings with DHA and at biweekly meetings with the TRICARE Regional Offices. DHA officials told us that the TRICARE contracts have specific expansion goals and deadlines, such as requiring that 50 percent of network providers be in the system 120 days prior to the start of health care delivery. Given that both TRICARE Select and the new TRICARE contracts were implemented on January 1, 2018, it is too early to determine whether this approach will be sufficient to deal with any upcoming challenges and risks.

Implementation Plan Does Not Reflect Current Approach for Establishing Access Standards

Our review of element B—ensure access standards for appointments for health care that meet or exceed those of high-performing health care systems in the United States, as determined by the Secretary—noted that the approach described in the implementation plan differs from the approach that DOD intends to use. The implementation plan states that the access standards for TRICARE Select will mirror those of TRICARE Prime, DOD's managed care option, and that DOD will continue to compare these standards with those of high-performing U.S. health care systems. However, DOD officials told us in interviews that the access standards for TRICARE Select will be developed by each managed care support contractor and approved by DOD. This approach is outlined in contract documents, which state that the contractors are required to develop access-to-care plans that detail how they will ensure access standards that meet or exceed those of high-performing health care systems in the United States. DOD officials told us that they did not intend to suggest in the plan that the TRICARE Prime access standards would be applied to TRICARE Select. Instead, these officials explained that they meant that the access standards for TRICARE Select would be evaluated with the same tools as the access standards for TRICARE Prime. DOD officials further stated that they did not include information about the managed care support contractors proposing their own access standards because they were still developing the approach to this element when the implementation plan was submitted.
DOD officials told us they decided on this approach because there is no national model for preferred provider organization access standards, and therefore they did not want to be prescriptive about the access standards for this option. However, as a result of this approach, there is the potential that the managed care support contractors for the East and West regions could be using two different sets of access standards for TRICARE Select. Standards for internal control in the federal government state that management should externally communicate the necessary information to achieve the entity's objectives. Because the implementation plan does not reflect DOD's current approach, Congress may lack important information, including the contractors' responsibilities for providing access to care, which impedes its ability to provide oversight.

Conclusions

On January 1, 2018, DOD implemented significant changes to the TRICARE program, which provides health care to millions of beneficiaries worldwide. One of these changes is the establishment of a new preferred provider option—TRICARE Select—intended to modernize the TRICARE benefit and improve beneficiaries' access to care. While DOD's implementation plan for this new option addressed all of the elements that were required, time constraints along with competing priorities impeded DOD's ability to fully develop its approach for some elements, which are being addressed through other oversight efforts. Furthermore, although one of TRICARE Select's primary goals is to improve access to care, DOD's implementation plan does not reflect how access standards will be established. Without the most current information, it will be difficult for Congress to determine whether the department is achieving its mission of ensuring that beneficiaries receive the right level of care, at the right time, delivered by the right provider.

Recommendation for Executive Action

We recommend that the Secretary of Defense direct the Assistant Secretary of Defense (Health Affairs) to provide written documentation of DOD's approach to developing and approving the TRICARE Select access standards, as well as the final access standards, to Congress. (Recommendation 1)

Agency Comments

We provided a draft of this report to DOD for comment. In its written comments, which are reproduced in Appendix II, DOD concurred with our recommendation. DOD stated that it will provide written documentation about the TRICARE Select access standards to Congress by June 30, 2018. DOD did not provide technical comments. We are sending copies of this report to the Secretary of Defense and appropriate congressional committees. In addition, the report will be available at no charge on GAO's website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or at draperd@gao.gov. Contact points for our Office of Congressional Relations and Office of Public Affairs can be found on the last page of this report. Other major contributors to this report are listed in appendix III.

Appendix I: Leading Practices for Strategic Management Planning as Identified by GAO's Prior Work

This appendix provides additional information regarding six elements identified by our prior work as leading practices for strategic management planning to establish a comprehensive, results-oriented framework. (See table 3.)
Appendix II: Comments from the Department of Defense

Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Bonnie Anderson (Assistant Director), Daniel Klabunde (Analyst-in-Charge), and Karen Belli made key contributions to this report. Also contributing were Jacquelyn Hamilton and Elizabeth T. Morrison.
Why GAO Did This Study

DOD offers health care services to approximately 9.4 million eligible beneficiaries through TRICARE, DOD's regionally structured health care program. In each of its regions, DOD uses contractors to manage health care delivery through civilian provider networks, among other tasks. The NDAA 2017 made several changes to the TRICARE program, including the establishment of a new preferred provider network health plan option called TRICARE Select. The NDAA 2017 also required DOD to develop an implementation plan for TRICARE Select that addresses seven specific mandated elements on access to care, beneficiary complaints, and quality metrics for network providers. The NDAA 2017 included a provision for GAO to review the implementation plan. This report examines the extent to which DOD's implementation plan addressed the mandated elements. GAO evaluated DOD's implementation plan using leading planning practices identified in GAO's prior work and standards for internal control. GAO examined program policies, procedures, and contracts and interviewed DOD officials and TRICARE regional contractors.

What GAO Found

The Department of Defense's (DOD) TRICARE Select Implementation Plan addressed the seven specific elements mandated by the National Defense Authorization Act for Fiscal Year 2017 (NDAA 2017). These elements are

- Element A: ensuring that at least 85 percent of the TRICARE Select beneficiary population is covered by the network by January 1, 2018;
- Element B: ensuring access standards for health care appointments;
- Element C: establishing mechanisms for monitoring compliance with standards for access to care;
- Element D: establishing health care provider-to-beneficiary ratios;
- Element E: monitoring complaints by beneficiaries with respect to network adequacy and health care provider availability;
- Element F: establishing requirements for mechanisms to monitor the responses to complaints by beneficiaries; and
- Element G: establishing mechanisms to evaluate the quality metrics of the network providers.

GAO also assessed the implementation plan against leading practices for sound strategic management planning and found that it incorporated many of the practices, such as establishing goals, strategies to achieve goals, and plans to assess progress. However, a few of the leading practices were only partially incorporated or not incorporated at all. For example, the implementation plan did not always fully address the leading practice that planning documents include strategies to achieve goals and plans to assess progress. DOD officials explained that some of the details of their approach to the elements had not been finalized when they were completing the implementation plan. These officials added that their approach to the implementation plan was to create a strategic overview, and that some of the details are contained in contract documents and monitored through their oversight responsibilities. Furthermore, GAO's assessment of the plan's elements found that the approach outlined in the implementation plan for ensuring access standards for health care appointments (Element B) is different from the approach DOD intends to use. The plan noted that DOD will use the access standards for TRICARE Prime—a managed care option—for TRICARE Select. However, DOD officials told GAO that the contractors are responsible for developing their own access standards, which DOD must approve.
These officials added that DOD did not include information about the contractors proposing their own access standards because DOD was still developing its approach to this element when the plan was submitted. Because the implementation plan does not reflect DOD's current approach, Congress may not have the information it needs about the contractors' responsibilities for providing access to care, which impedes its ability to provide oversight.

What GAO Recommends

GAO recommends that DOD provide written documentation of its approach for developing and approving the TRICARE Select access standards, as well as the final access standards, to Congress. DOD agreed with GAO's recommendation.
gao_GAO-18-506T
Background

Under the Homeland Security Act of 2002, responsibility for the apprehension, temporary detention, transfer, and repatriation of unaccompanied children is delegated to DHS, and responsibility for coordinating and implementing the care and placement of unaccompanied children is delegated to HHS's Office of Refugee Resettlement (ORR). U.S. Customs and Border Protection's U.S. Border Patrol and Office of Field Operations (OFO), as well as U.S. Immigration and Customs Enforcement (ICE), apprehend, process, temporarily detain, and care for unaccompanied children who enter the United States with no lawful immigration status. ICE's Office of Enforcement and Removal Operations is generally responsible for transferring unaccompanied children, as appropriate, to ORR, or repatriating them to their countries of nationality or last habitual residence. Under the William Wilberforce Trafficking Victims Protection Reauthorization Act of 2008 (Trafficking Victims Protection Reauthorization Act), unaccompanied children in the custody of any federal department or agency, including DHS, must be transferred to ORR within 72 hours after determining that they are unaccompanied children, except in exceptional circumstances. ORR has cooperative agreements with residential care providers to house and care for unaccompanied children while they are in ORR custody. The aim is to provide housing and care in the least restrictive environment commensurate with the children's safety and emotional and physical needs. In addition, these residential care providers, referred to here as grantees, are also responsible for identifying and assessing the suitability of potential sponsors—generally a parent or other relative in the country—who can care for the child after they leave ORR custody. To do this, grantees collect information from potential sponsors and run various background checks. In cases in which there are questions about the ability of the sponsor to meet the child's needs and provide a safe environment, and for children included in specified categories under the Trafficking Victims Protection Reauthorization Act, a home study is also conducted. In certain circumstances ORR may also arrange for post-release services for the child. Release to a sponsor does not grant unaccompanied children legal immigration status. Children are scheduled for removal proceedings in immigration courts to determine whether they will be ordered removed from the United States or granted immigration relief. There are several types of immigration relief that may be available to these children, for example, asylum or Special Immigrant Juvenile status.

A Joint Collaborative Process for the Referral and Placement of Unaccompanied Children Has Not Yet Been Implemented

In response to a recommendation in our 2015 report, DHS and HHS have agreed to establish a joint collaborative process for the referral and transfer of unaccompanied children from DHS to ORR shelters, but the process has not yet been implemented. It will be important to ensure that, once implemented, this process has clearly defined roles and responsibilities for each agency, as we recommended. In 2015, we reported that the interagency process to refer unaccompanied children from DHS to ORR shelters was inefficient and vulnerable to error.
For example, as of April 2015, DHS and ORR relied on e-mail communication and manual data entry to coordinate the transfer of unaccompanied children to shelters because each agency used its own data system and these systems did not automatically communicate information with one another. These modes of communication made the referral and placement process vulnerable to error and possible delay in the transfer of these children from DHS to ORR. Each DHS component also submitted shelter requests to ORR in a different way. We reported that the roles and responsibilities of DHS components were not consistent during the referral and placement process, and DHS points of contact for ORR varied across Border Patrol sectors and ICE and OFO areas of operation. Further, we noted that the inefficiencies in the placement process for unaccompanied children had been a long-standing challenge for DHS and ORR. Therefore, we recommended that DHS and HHS jointly develop and implement a documented interagency process with clearly defined roles and responsibilities, as well as procedures to disseminate placement decisions, for all agencies involved in referring and placing unaccompanied children in ORR shelters. The agencies agreed with this recommendation and, in response, DHS and HHS finalized a memorandum of agreement (MOA) in February 2016. The MOA provides a framework for coordinating each agency's responsibilities and establishing procedures, shared goals, and interagency cooperation with respect to unaccompanied children. The MOA states that DHS and HHS agree to establish a joint concept of operations. According to the MOA, this joint concept is to include, among other things, standard protocols for consistent interagency cooperation on the care, processing, and transport of these children during steady state operations, as well as in the event the number of unaccompanied children exceeds standard capabilities and existing resources. In February 2018, HHS officials told us that the agency is reviewing a draft of the DHS-HHS joint concept of operations. To fully address our recommendation, DHS and HHS will need to ensure that this joint concept, once finalized and implemented, includes a documented interagency process with clearly defined roles and responsibilities, as well as procedures to disseminate placement decisions, as we recommended.

ORR Reports Taking Steps to Improve Monitoring of Grantees' Provision of Services

In response to a recommendation in our 2016 report, ORR reported taking steps to improve monitoring of its grantees, including reviewing its monitoring protocols and ensuring all grantees were monitored over a 2-year period. These steps should increase the timeliness, completeness, and consistency of ORR's monitoring; however, ORR needs to ensure that its updated processes and protocols are fully implemented and in use. In 2016, we reported that ORR relies on grantees to provide care for unaccompanied children, such as housing and educational, medical, and therapeutic services, and to document in children's case files the services they provide. Grantees are required to provide these services and to document that they did so. However, in our 2016 report, we found that documents—such as legal presentation acknowledgment forms, records of group counseling sessions, or clinical progress notes—were often missing from the 27 randomly selected case files we reviewed. In addition, we identified several cases in which forms that were present in the files were not signed or dated.
We found that although ORR used its web-based data system to track some information about the services children received, and grantees reported on the services they provided in their annual reports, the documents contained in case files were the primary source of information about the services provided to individual children. We concluded that without including all of the documents in case files, it was difficult for ORR to verify that required services were actually provided in accordance with ORR policy and cooperative agreements. In our 2016 report, we noted that ORR's most comprehensive monitoring of grantees occurred during on-site visits, but that on-site visits to facilities were inconsistent. Prior to fiscal year 2014, project officers were supposed to conduct on-site monitoring of facilities at least once a year. However, we found in our review of agency data that many facilities had not received a monitoring visit for several years. For example, ORR had not visited 15 facilities for as many as 7 years. In 2014, ORR revised its on-site monitoring program to ensure better coverage of grantees and implemented a biennial on-site monitoring schedule. Nevertheless, ORR did not meet its goal to visit all of its facilities by the end of fiscal year 2015, citing lack of resources. In our 2016 report, we concluded that without consistently monitoring its grantees, ORR could not know whether they were complying with their agreements and whether children were receiving needed services. We recommended that the Secretary of HHS direct ORR to review its monitoring program to ensure that on-site visits are conducted in a timely manner, case files are systematically reviewed as part of or separate from on-site visits, and that grantees properly document the services they provide to children. HHS concurred with this recommendation and stated that it had created a new monitoring initiative workgroup to examine opportunities for further improvement. Since our 2016 report, ORR has reported achieving more timely and complete monitoring. In May 2017, ORR issued a summary of its fiscal year 2016 monitoring showing that monitoring of all of its 88 grantees was completed over the 2-year period of fiscal years 2015 and 2016. As a result of this monitoring, the agency reported issuing 786 corrective actions, almost all of which were closed within 90 days. The most common corrective actions were related to incomplete case file documentation and inconsistent implementation of some of ORR's policies and procedures, according to ORR. Subsequently, for the 2-year period of fiscal years 2017 and 2018, ORR reported that as of April 2018, it had completed monitoring of 65 grantees and planned to complete monitoring of all of its remaining 39 grantees by the end of the fiscal year. In addition, ORR has reported that it is taking steps to ensure its monitoring processes and protocols are more systematic and uniform. During 2016, ORR announced a new Monitoring Initiative with the goal of establishing a comprehensive system of monitoring for all ORR-funded programs; HHS reported that it had conducted three trainings for ORR Project Officers and was in the process of adding two to three additional Project Officer positions to the unaccompanied children program. In April 2018, HHS reported that ORR was in the process of reviewing and revising its monitoring tools, and planned to have final versions of these tools completed by the end of fiscal year 2018.
Once ORR completes its review of its monitoring tools and fully implements its revised protocols, these steps, along with its more timely monitoring, should help ensure an improved monitoring program.

ORR Relies on Grantees to Identify and Screen Sponsors for Unaccompanied Children

In 2016, we reported that ORR grantees that provide day-to-day care of unaccompanied children are responsible for identifying and screening sponsors prior to releasing children to them. During children's initial intake process, case managers ask them about potential sponsors with whom they hope to reunite. Within 24 hours of identifying potential sponsors, case managers are required to send them a Family Reunification Application to complete. The application includes questions about the sponsor and other people living in the sponsor's home, including whether anyone in the household has a contagious disease or criminal history. Additionally, the application asks for information about who will care for the child if the sponsor is required to leave the United States or becomes unable to provide care. Grantees also ask potential sponsors to provide documents to establish their identity and relationship to the child, and they conduct background checks. The types of background checks conducted depend on the sponsor's relationship to the child (see table 1). In certain circumstances prescribed by the Trafficking Victims Protection Reauthorization Act or ORR policy, a home study must also be conducted before the child is released to the sponsor. Additionally, other household members are also subjected to background checks in certain situations, such as when a documented risk to the safety of the unaccompanied child is identified, the child is especially vulnerable, and/or the case is being referred for a mandatory home study. In our 2016 report, we found that between January 7, 2014, and April 17, 2015, nearly 52,000 children from El Salvador, Guatemala, or Honduras were released to sponsors by ORR. Of these children, nearly 60 percent were released to a parent. Fewer than 9 percent of these children were released to a non-familial sponsor, such as a family friend, and less than 1 percent of these children were released to a sponsor with whom their family had no previous connection (see table 2). Historically, most of these unaccompanied children have been adolescents 14 to 17 years of age, but about a quarter of the children from these three countries in 2014 and early 2015 were younger.

ORR Reports Creating New Data Collection Guidance on Post-Release Services

In response to a recommendation in our 2016 report, ORR reported taking various steps to collect additional information on the services provided to unaccompanied children after they are released from ORR custody. We welcome this progress, but continue to believe that further steps are needed to fully address our recommendation. In 2016, we reported that limited information was available about post-release services provided to children and their sponsors. Post-release services include such things as guidance to the sponsor to ensure the safest environment possible for the child; assistance accessing legal, medical, mental health, and educational services for the child; and information on initiating steps to establish guardianship, if necessary. The Trafficking Victims Protection Reauthorization Act requires ORR to provide post-release services to children if a home study was conducted, and authorizes ORR to provide these services to some additional children.
Our 2016 report noted that ORR was in a position to compile the data it collects on post-release services, and to share the data internally and externally with other federal and state agencies to help them better understand the circumstances these children face when they are released to their sponsors. ORR was already collecting some information from its post-release grantees on services provided to children after they left ORR custody, and its newly instituted well-being calls and National Call Center would allow it to collect additional information about these children. However, at the time, ORR did not have processes in place to ensure that all of these data were reliable, systematically collected, and compiled in summary form to provide useful information about this population for its use and for other government agencies, such as state child welfare services. As a result, in our 2016 report, we recommended that the Secretary of HHS direct ORR to develop a process to ensure all information collected through its existing post-release efforts is reliable and systematically collected, so that the information could be compiled in summary form to provide useful information to other entities internally and externally. HHS concurred and stated that ORR would implement an approved data collection process that would provide more systematic and standardized information on post-release services and that it would make this information available to other entities internally and externally. At the time of our 2016 study, a relatively small percentage of unaccompanied children who had left ORR custody were receiving post-release services. Officials said ORR's responsibility typically ended after it transferred custody of children to their sponsors. We found that slightly less than 10 percent of unaccompanied children received post-release services in fiscal year 2014, including those for whom a home study was conducted. However, the percentage of unaccompanied youth receiving post-release services has increased in recent years. According to publicly available ORR data, approximately 31 percent of unaccompanied youth received such services in fiscal year 2015, 20 percent in fiscal year 2016, and 32 percent in fiscal year 2017. In addition, during 2015, ORR had taken steps to expand eligibility criteria for post-release services. According to ORR officials, these changes included making all children released to a non-relative or distant relative eligible for such services. ORR also began operating a National Call Center help-line in May 2015. Children who contacted ORR's National Call Center within 180 days of release and who reported experiencing (or being at risk of experiencing) a placement disruption also became eligible for post-release services, according to ORR officials. Additionally, our 2016 report noted that in August 2015, ORR had instituted a new policy requiring grantee facility staff to place follow-up calls, referred to as Safety and Well Being follow-up calls, to all children and their sponsors 30 days after the children are placed to determine whether they were still living with their sponsors, enrolled in or attending school, and aware of upcoming removal proceedings, and to ensure that they were safe. ORR's policy required grantees to attempt to contact the sponsor and child at least three times.
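The mechanics of that follow-up policy are simple enough to model. The sketch below is purely illustrative (the record layout and names are ours, not ORR's); it encodes only the two concrete rules stated above: a call 30 days after placement and at least three contact attempts.

```python
# Illustrative model of the Safety and Well Being follow-up call policy
# described in the 2016 report. All names and the structure are hypothetical,
# not ORR's actual case management system.
from dataclasses import dataclass
from datetime import date, timedelta

MIN_ATTEMPTS = 3                      # policy: attempt contact at least 3 times
FOLLOW_UP_DELAY = timedelta(days=30)  # policy: call 30 days after placement

@dataclass
class FollowUpCase:
    child_id: str
    placement_date: date
    attempts: int = 0
    reached: bool = False  # child/sponsor reached and safety items confirmed

    def due_date(self) -> date:
        return self.placement_date + FOLLOW_UP_DELAY

    def policy_satisfied(self) -> bool:
        # Satisfied once the family is reached, or once the minimum number
        # of documented contact attempts has been made.
        return self.reached or self.attempts >= MIN_ATTEMPTS

case = FollowUpCase(child_id="case-001", placement_date=date(2016, 1, 4), attempts=2)
print(case.due_date(), case.policy_satisfied())  # 2016-02-03 False
```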
In August 2017, ORR told us that the agency had created new guidance on case reporting, records management, retention, and information-sharing requirements for post-release service providers, and that it had collected data on Safety and Well Being follow-up calls that had been made to children and their sponsors. For example, ORR told us that during the first quarter of fiscal year 2016, its grantees reached 87 percent of unaccompanied children and 90 percent of sponsors by phone within 30 to 37 days after the child's release from ORR care. In the second quarter of fiscal year 2016, these figures were 80 percent and 88 percent, respectively. ORR also said that the agency had developed a plan for collecting and analyzing National Call Center data. However, as of April 2018, ORR officials noted that case management functionality had not yet been built into ORR's web-based portal. Further, ORR officials told us that the agency planned to create uniform data collection reporting forms for grantees providing post-release services, but as of April 2018, it had not developed these forms. ORR's steps represent progress towards systematically collecting information that can be used internally and shared, as appropriate, with external agencies; however, to ensure our recommendation is fully addressed, ORR will need to complete its data collection and reporting efforts. With respect to unaccompanied children's immigration proceedings, we reported in 2016 that several different outcomes are possible, and that the outcomes for many children had not yet been determined. An unaccompanied child who is in removal proceedings can apply for various types of lawful immigration status with DHS's U.S. Citizenship and Immigration Services (USCIS), including asylum and Special Immigrant Juvenile status. Alternatively, an unaccompanied child who has not sought, or has not been granted, certain immigration benefits within the jurisdiction of USCIS may still have various forms of relief available to him or her during immigration proceedings. For example, an immigration judge may order the child removed from the United States, close the case administratively, terminate the case, allow the child to voluntarily depart the United States, or grant the child relief or protection from removal. Moreover, a judge's initial decision does not necessarily indicate the end of the removal proceedings. For example, cases that are administratively closed can be reopened, new charges may be filed in cases that are terminated, and children may appeal a removal order. In addition, in cases involving a child who receives a removal order in absentia, and a motion to reopen the child's case has been properly filed, the child is granted a stay of removal pending a decision on the motion by the immigration judge. In our 2016 report, we found that according to ICE data on final removal orders from fiscal year 2010 through August 15, 2015, ICE removed 10,766 unaccompanied children, and about 63 percent of these children (6,751) were from El Salvador, Guatemala, or Honduras. Chairman Portman, Ranking Member Carper, and Members of the Subcommittee, this concludes my prepared remarks. I would be happy to answer any questions that you may have.

GAO Contacts and Staff Acknowledgments

For further information regarding this testimony, please contact Kathryn A. Larin at (202) 512-7215 or larink@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement.
Individuals who made key contributions to this testimony include Margie K. Shields (Assistant Director), David Barish (Analyst-in-Charge), James Bennett, Kathryn Bernet, Ramona Burton, Rebecca Gambler, Theresa Lo, Jean McSween, James Rebbe, Almeta Spencer, Kate van Gelder, and James Whitcomb. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study

ORR is responsible for coordinating and implementing the care and placement of unaccompanied children—that is, children who enter the United States with no lawful immigration status. The number of these children taken into custody by DHS and placed in ORR's care rose from about 6,600 in fiscal year 2011 to nearly 57,500 in fiscal year 2014, many coming from Central America. Though declining somewhat, the number has remained well above historical levels. In fiscal year 2017, DHS referred 40,810 such children to ORR. This testimony discusses efforts by DHS and HHS to improve the placement and care of unaccompanied children in four key areas: (1) the process by which unaccompanied children are transferred from DHS to ORR custody; (2) how ORR monitors the care of unaccompanied children in its custody; (3) how ORR identifies and screens sponsors before children are transferred to their care; and (4) what is known about services these children receive after they leave ORR custody. This testimony is based primarily on the findings from two prior GAO reports: a 2015 report on actions needed to ensure unaccompanied children receive required care in DHS custody; and a 2016 report on further actions HHS could take to monitor their care. This testimony also includes updated information on the progress agencies have made in implementing GAO's recommendations, and more recent statistics from publicly available sources.

What GAO Found

The Department of Homeland Security (DHS) and Department of Health and Human Services (HHS) have agreed to establish a joint collaborative process for the referral and placement of unaccompanied children, but the process has not yet been implemented. In 2015, GAO reported that the interagency process for referring unaccompanied children from DHS to HHS's Office of Refugee Resettlement (ORR) shelters was inefficient and vulnerable to error, and that each agency's role and responsibilities were unclear. GAO recommended that DHS and HHS jointly develop and implement an interagency process with clearly defined roles and responsibilities, as well as procedures to disseminate placement decisions, for all agencies involved. In February 2018, HHS officials told GAO that the agency was reviewing a draft of the DHS-HHS joint concept of operations. ORR has reported taking steps to improve monitoring of grantees that provided services to unaccompanied children. In 2016, GAO reported that ORR relied on grantees to document and annually report on the care they provide for unaccompanied children, such as housing and educational, medical, and therapeutic services, but documents were often missing and ORR was not able to complete all of its planned visits. GAO recommended that ORR review its monitoring program to ensure that onsite visits are conducted in a timely manner, that case files are systematically reviewed, and that grantees properly document the services they provide. Since 2016, ORR has reported that its grantee monitoring has improved, with more timely completion of on-site monitoring of all its grantees. ORR relies on grantees to identify and screen sponsors before placing children with them. In 2016, GAO reported that most unaccompanied children from certain Central American countries were released to a parent or other relative, in accordance with ORR policy (see figure).
Sponsors' Relationship to Unaccompanied Children from El Salvador, Guatemala, and Honduras (Released from Custody from January 7, 2014, through April 17, 2015) In 2016, GAO reported that limited information was available on the services provided to children after they leave ORR care, and recommended that HHS develop processes to ensure its post-release activities provide reliable and useful summary data. Subsequent data from ORR indicate that the percentage of children receiving these services has increased, from about 10 percent in fiscal year 2014, to about 32 percent in fiscal year 2017. Also, in August 2017, ORR officials said that new case reporting requirements had been added to ORR's policy guide; however, further steps are needed to ensure the systematic collection of these data to provide useful information on post-release services across agencies, as GAO recommended.
gao_GAO-18-326
Background

DOD's organizational structure includes the Office of the Secretary of Defense, the Joint Chiefs of Staff, the military departments, numerous defense agencies and field activities, and various unified combatant commands that contribute to the oversight of DOD's acquisition programs. Prior to February 2018, the former Under Secretary of Defense for Acquisition, Technology, and Logistics also served as the principal acquisition official of the department and was the acquisition advisor to the Secretary of Defense. The former Under Secretary also served as the Defense Acquisition Executive and was the official responsible for supervising the acquisition of MAIS programs. The former Under Secretary's authority included directing the military services and defense agencies on acquisition matters and making milestone decisions for MAIS and other programs. This official also had policy and procedural authority for the defense acquisition system, which establishes the steps that DOD programs generally take to plan, design, acquire, deploy, operate, and maintain the department's information systems. However, as of February 2018, the department changed the way it conducts business and operations with the statutory elimination of the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics. The statute contains a provision that required DOD to establish a new Office of the Under Secretary of Defense for Research and Engineering to be responsible for driving innovation and acceleration of the advancement of warfighting capability. In addition, a new Office of the Under Secretary of Defense for Acquisition and Sustainment was created to focus on delivering proven technology more quickly. The creation of these offices within the department is intended to shift the principal focus of the Office of the Secretary of Defense from a role of program oversight to that of directing major department investments. Further, the statutory creation of a Chief Management Officer to replace the former Deputy Chief Management Officer is intended to improve the quality and productivity of the department's business operations.

DOD's Acquisition Guidance and Framework for Managing MAIS Acquisitions

In January 2015, DOD updated its guidelines that outline the framework for MAIS programs. This framework consists of six models for acquiring and deploying a program, including two hybrid models that each describe how a program may be structured based on the type of product being acquired (e.g., software-intensive programs and hardware-intensive programs). A generic acquisition model that shows all of the program life-cycle phases and key decision points is depicted in figure 1 and described below.

Materiel solution analysis: Refine the initial system solution (concept) and create a strategy for acquiring the solution. A decision—referred to as Milestone A—is made at the end of this phase to authorize entry into the technology maturation and risk reduction phase.

Technology maturation and risk reduction: Determine the preferred technology solution and validate that it is affordable, satisfies program requirements, and has acceptable technical risk. A decision—referred to as Milestone B—is made at the end of this phase to authorize entry of the program into the engineering and manufacturing development phase and award development contracts. An acquisition program baseline is first established at the Milestone B decision point.
A program's first acquisition program baseline contains the original life-cycle cost estimate (which includes acquisition and operations and maintenance costs), the schedule estimate (which consists of major milestones and decision points), and performance parameters that were approved for that program by the milestone decision authority. The first baseline is established after the program has refined user requirements and identified the most appropriate technology solution that demonstrates that it can meet users' needs.

Engineering and manufacturing development: Develop a system and demonstrate through testing that the system meets all program requirements. A decision—referred to as Milestone C—is made during this phase to authorize entry of the system into the production and deployment phase or into limited deployment in support of operational testing.

Production and deployment: Achieve an operational capability that meets program requirements, as verified through independent operational tests and evaluation, and implement the system at all applicable locations.

Operations and support: Operationally sustain the system in the most cost-effective manner over its life cycle.

Leading Practices for Managing IT Investments and Acquisition Programs

We have developed and identified leading practices for governing IT investments to help guide organizations to better manage and oversee their projects. GAO's Information Technology Investment Management guide states that good performance data and stakeholder oversight are elements that can lead to positive outcomes, such as helping to ensure a project is keeping to its initial cost, schedule, and performance goals. The guide also states that projects should be reviewed at regular intervals to monitor performance so that stakeholders can be aware of and review any differences between actual outcomes and goals. In addition, we and other entities, such as the Software Engineering Institute at Carnegie Mellon University, have identified leading practices to help guide organizations to effectively plan and manage their acquisitions of major IT systems. Our prior reviews have shown that proper implementation of such practices can significantly increase the likelihood of delivering promised system capabilities on time and within budget. These practices include, but are not limited to:

Requirements management: Requirements establish what the system is to do, how well it is to do it, and how it is to interact with other systems. Appropriate requirements management involves eliciting and developing customer and stakeholder requirements, and analyzing them to ensure that they will meet users' needs and expectations. It also consists of validating requirements as the system is being developed to ensure that the final systems to be deployed will perform as intended in an operational environment.

Risk management: Risk management is a process for anticipating problems and developing plans to take appropriate steps to mitigate risks and minimize their impact on program commitments. It involves identifying and documenting risks, categorizing them based on their estimated impact, prioritizing them, developing risk mitigation strategies, and tracking progress in executing the strategies.
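As a rough illustration of the risk management practice just described, a minimal risk register might look like the following sketch; the structure, scoring, and example risks are our own assumptions, not DOD's process or the Software Engineering Institute's.

```python
# Minimal risk-register sketch of the practice described above: identify and
# document risks, categorize them by estimated impact, prioritize them, attach
# mitigation strategies, and track progress. Purely illustrative.
from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    probability: float    # estimated likelihood, 0.0 to 1.0
    impact: int           # estimated impact on program commitments, 1 (low) to 5 (high)
    mitigation: str       # planned mitigation strategy
    status: str = "open"  # tracking: open / mitigating / closed

    @property
    def exposure(self) -> float:
        # A common prioritization score: likelihood times impact.
        return self.probability * self.impact

register = [
    Risk("Contractor staffing shortfall delays development", 0.4, 4,
         "Negotiate surge-support options before development award"),
    Risk("Interface to legacy system remains unstable", 0.6, 3,
         "Freeze the interface baseline before Milestone B"),
]

# Review the highest-exposure risks first, as an investment board might.
for risk in sorted(register, key=lambda r: r.exposure, reverse=True):
    print(f"{risk.exposure:.1f}  {risk.status:<10}  {risk.description}")
```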
DOD's Policies for Managing MAIS Programs Do Not Always Adhere to Leading IT Management Practices

According to GAO's Information Technology Investment Management guide, leading practices for managing IT projects include:

- instituting the investment board, which is the process for creating and defining the membership, guiding policies, operations, roles, responsibilities, and authorities within the organization;
- identifying decision authorities for making important acquisition decisions;
- providing oversight whereby the organization monitors each project on its performance progress (e.g., establishing and tracking baseline estimates on cost and schedule goals, and thresholds to identify high risk on cost and schedule); and
- capturing and providing performance information about a particular investment (project) to decision makers at regular intervals (e.g., quarterly and annually).

To align MAIS programs with the functions they perform, DOD recently made changes in how it characterizes its MAIS programs and, as a result, different programs must follow different management policies. Specifically, in April 2017, DOD identified 10 of 34 total MAIS programs as business programs, and the Director, Acquisition Resources and Analysis, announced that these programs would adhere to DOD's Instruction 5000.75 policy for management and oversight. Further, in November 2017, the former Under Secretary of Defense for Acquisition, Technology, and Logistics announced that non-business MAIS programs would adhere to DOD's Instruction 5000.02 policy for management and oversight. However, the policies used for MAIS programs are not consistent in their adherence to leading IT management practices. For example, while the policy for non-business MAIS programs is consistent in its adherence to all four of the leading IT management practices, the policy for MAIS business programs is consistent in its adherence to only two of the four practices. Table 1 shows our analysis of DOD's policies for non-business MAIS programs and MAIS business programs and their adherence to the leading IT management practices. As shown in the table, DOD's policy for non-business MAIS programs adheres to all four leading IT management practices. For example, the policy requires non-business MAIS programs to report the status of each program's cost, schedule, and technical performance information quarterly and annually. The policy also designates specific decision makers who are responsible for monitoring and overseeing the progress of non-business system MAIS programs. Further, the policy requires each program to establish and report its initial baseline estimates and current estimates on cost and schedule so its performance can be tracked and monitored. In addition, to identify when programs may be at risk of significant cost or schedule increases, the policy requires programs to predetermine cost and schedule threshold estimates as an early warning indicator of when programs reach the point where they are at increased risk. In contrast, DOD's policy for MAIS business programs adheres to only two of the four practices. Specifically, the policy adheres to the practice of instituting an investment board with processes for creating and defining the membership, policies, operations, roles, responsibilities, and authorities within the organization. In addition, the policy identifies decision authorities for making important executive-level acquisition decisions.
However, the policy does not specify the establishment of initial and current baseline estimates on cost and schedule, and it does not specify the reporting of threshold cost and schedule estimates to identify the point when programs may be at high risk. In addition, the policy does not adhere to the leading practice requiring the periodic (quarterly and annual) reporting of performance information to stakeholders.

To help address the need for improved guidance, the former Under Secretary of Defense for Acquisition, Technology and Logistics established a cross-functional team to examine the future of non-business MAIS programs and MAIS business programs from a policy, organization, management, and reporting perspective. The team was expected to provide its recommendations to the Under Secretary of Defense for Acquisition and Sustainment by March 15, 2018. However, because the Under Secretary had made no final decisions as of that date, it is unclear what specific actions the department will take on the team's policy recommendations, among others, to improve the management of non-business MAIS programs and MAIS business programs.

Until DOD updates its policy for MAIS business programs to require the establishment of baseline estimates on cost and schedule, including threshold estimates that identify when programs may be at high risk, stakeholders may not have the information they need to manage and oversee MAIS business programs. Further, unless the department updates its policy for MAIS business programs to adhere to the leading practice of periodically (quarterly and annually) reporting essential performance information, stakeholders may not have the information they need to make informed decisions for managing and overseeing MAIS business programs.

All Selected MAIS Programs Had Changes in Cost and Schedule Estimates, and Most Programs Had Met Performance Targets

All of the 15 selected MAIS programs had either increased or decreased their planned cost estimates, and 10 of them had delays in their planned schedule estimates, when comparing the first acquisition program baseline to the most recent acquisition program baseline estimates. The changes in cost estimates ranged from a decrease of $1.6 billion (-41 percent) to an increase of $1.5 billion (163 percent), and slippages in schedule estimates ranged from 5 months to 5 years. Further, 9 of the 15 selected programs had conducted testing for which we could report on the number of performance targets met. Of those 9, 6 programs reported that they had met all of their performance targets; the remaining 3 reported that they met several but not all of their targets. The following table shows the extent of changes in planned cost and schedule estimates for the selected MAIS programs since the first baseline estimate, as well as the number of performance targets met.

All Selected MAIS Programs Had Either Increases or Decreases in Their Planned Cost

All 15 selected MAIS programs had experienced increases or decreases in their planned cost estimates when comparing the initial, or first, baseline estimate to the current estimate. Specifically, 10 programs had decreases in their cost estimates that ranged from $1.2 million (less than -1 percent) for the Defense Agencies Initiative, Increment 2 program to $1.6 billion (-41 percent) for the Air Force's Base Information Transport Infrastructure Wired program.
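The percentage changes reported above follow from comparing a program's current life-cycle cost estimate to its first acquisition program baseline. The short sketch below illustrates the calculation; the dollar figures are rounded approximations of the Air Force program's reported change, not exact values from our analysis.

```python
def percent_change(first_baseline: float, current_estimate: float) -> float:
    """Percent change from the first acquisition program baseline to the
    current estimate; negative values indicate a cost decrease."""
    return (current_estimate - first_baseline) / first_baseline * 100

# Approximate figures, in billions of dollars: a baseline of roughly $3.9 billion
# reduced by roughly $1.6 billion yields the -41 percent change reported above.
print(f"{percent_change(3.9, 2.3):.0f} percent")  # prints: -41 percent
```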
Program officials reported that reductions in planned cost estimates were largely due to changes in program scope. Specifically, the reasons for the cost reductions included the following:

Program scope changes. Officials for the Air Force's Joint Space Operations Center Mission System Increment 2 program reported that its 12 percent cost decrease was due to a reduction in its estimate for operations and support, which was changed from 20 years to 10 years. Officials for the Defense Information Systems Agency's Global Combat Support System–Joint Increment 8 program reported that its 20 percent cost decrease was due to a reduction in the program's scope for the number of development hours required to meet logistics and operational needs. In addition, officials for the Defense Information Systems Agency's Teleport Generation 3 program reported that its 22 percent cost decrease was due to a revised scope in terms of what is needed at the Milestone C decision point for low rate production.

Design reconfiguration. Officials for the Air Force's Base Information Transport Infrastructure Wired program reported that its 41 percent cost decrease was due to a reduction in the program's scope when the program changed from a base network system to a critical core configuration.

In addition, 5 of the programs had experienced cost increases. These cost increases ranged from $2.9 million (less than 1 percent) for the Army's Logistics Modernization Program Increment 2 to $1.5 billion (163 percent) for the Army's Tactical Mission Command program. Program officials reported a variety of reasons for the increases in planned cost estimates, including the following:

Underestimating schedule. Officials for the Air Force's Defense Enterprise Accounting and Management System Increment 1 program attributed its 60 percent cost increase to underestimating the level of effort needed to develop the system within the estimated schedule. For example, the program did not account for software upgrades and, when this effort was added to the schedule, the cost increased.

Contractor issues. Officials from the National Security Agency's Key Management Infrastructure Increment 2 program attributed its 14 percent cost increase to schedule delays caused by the contractor and, as a result, increased funding at the Milestone C decision point.

Underestimating development and test efforts. Officials from the Army's Tactical Mission Command program attributed its 163 percent cost increase to higher than expected costs to conduct research and developmental tests.

Ten Selected MAIS Programs Had Delays in Their Planned Schedule Estimates

Ten of the 15 selected MAIS programs had experienced changes in their planned schedule estimates, and 5 programs had no changes to their schedule estimates. The changes consisted of schedule slippages that ranged from 5 months, for both the Army's Logistics Modernization Program Increment 2 and the Defense Health Agency's Department of Defense Healthcare Management System program, to 5 years for the Defense Enterprise Accounting and Management System Increment 1 program. Program officials reported that delays in the planned schedule estimates were due to unplanned budget reductions or unrealistic expectations regarding project milestones. Specifically, the reasons for these schedule slippages included the following:

Aggressive schedule, funding reduction, and contract issues.
Officials for the Air Force's Joint Space Operations Center Mission System Increment 2 program attributed its schedule slippage of 2 years and 11 months to funding reductions of $18.9 million in fiscal years 2013 and 2014. In addition, the officials noted that an aggressive schedule for a Milestone B decision, contracting issues in the earlier acquisition phase, and longer than expected time to obtain personnel had contributed to the slippage.

Longer than expected time to reach deployment. Officials for the Air Force's Defense Enterprise Accounting and Management System Increment 1 program reported that its schedule slippage of 5 years occurred because of a change in the approach to deliver the system in multiple increments, thereby increasing the amount of time it would take to reach the deployment decision milestone. Also, officials for the Defense Information Systems Agency's Teleport Generation 3 program reported a slip of 3 years and 2 months. This schedule delay was due to the program's inability to develop the mobile user and system interface capability by the estimated deployment milestone. Further, program officials for the Navy's Consolidated Afloat Networks and Enterprise Services program attributed its schedule slip of 2 years and 6 months to a longer than expected maintenance period for the test platform and to a lengthy budget approval process, resulting in a slippage in the deployment date.

Unplanned procurement funding reduction. Officials for the Army's Global Combat Support System-Army program reported that its schedule delay of 11 months was due, in part, to a $16 million decrease to the fiscal year 2016 budget. This unplanned reduction in procurement funding affected the program's ability to field the system as originally planned.

Contractor staffing issues. Officials for the National Security Agency's Key Management Infrastructure Increment 2 program reported significant schedule delays due to the contractor's inability to staff the program with software developers who had the required security clearances. As a result, a critical change was reported in January 2012 that led to a new independent cost estimate, which extended program development by 10 months. The new estimate included additional time to improve the governance structure, such as increasing discipline across the oversight process, adding more stakeholder interaction, and improving the use of metrics.

Six of Nine MAIS Programs Had Met All Performance Targets

Among other information, DOD uses key performance parameters as a metric to report on programs' progress toward meeting system performance targets. This information includes a description of the performance characteristics, the objective and threshold value for each target and, importantly, whether the target has been met in demonstrating performance. Of the nine programs we evaluated, six reported that they met all of their performance targets. For example, the Navy's Common Aviation Command and Control System, Increment 1 program reported in May 2017 that both of its technical performance targets had been met. According to the program, these targets were related to the readiness of the system to fully support all operational activities and satisfy all technical requirements for military operations, and to the fusion of all kinds of data onto any workstation. In another example, the Army's Logistics Modernization Program Increment 2 reported in June 2017 that all seven of its performance targets had been met.
According to the program, these targets were related to the system's ability to support military operations, exchange information in the network, provide system and information assurance in a disaster recovery scenario, and be operationally available.

Further, three programs reported that they met several, but not all, of their performance targets. For example, the Navy's Consolidated Afloat Networks and Enterprise Services program reported that it met eight of nine performance targets. According to program officials, the remaining target (i.e., the network shall fully support joint critical operational activities) had not been met because the program lacked an operational platform that was required to demonstrate its performance. The Defense Information Systems Agency's Teleport Generation 3 program reported that it met 8 of 12 performance targets. According to program officials, the remaining 4 targets (i.e., coverage to allow warfighter communications, capacity to provide 100 percent of the required services, and interoperability with military and commercial frequencies and waveforms) had not been met because the program needed to field multiple systems and perform solution testing, which officials expect to be completed in fiscal year 2018. Further, Air Force officials reported that the Defense Enterprise Accounting and Management System Increment 1 program met 3 of 4 targets (i.e., compliance with requirements, network ready, and sustainment to ensure materiel availability). The officials reported that the program did not meet the remaining target because it was waiting for an evaluation of cyber test results before proceeding.

Selected Programs Fully Implemented Most, but Not All, of the Leading Practices for Managing Requirements and Risk

According to the Software Engineering Institute's Capability Maturity Model Integration® for Acquisition (CMMI®-ACQ), an appropriate requirements management process involves establishing an agreed-upon set of requirements, ensuring traceability between requirements and work products, and managing any changes to the requirements in collaboration with stakeholders. Likewise, an effective risk management process identifies potential problems before they occur, so that risk-handling activities may be planned and invoked, as needed, across the life of the project in order to mitigate the potential for adverse impacts.

Leading requirements management practices help organizations to better manage the design, development, and delivery of systems within established cost and schedule time frames. These practices include:

developing an understanding with the requirements providers of the meaning of the requirements;

obtaining commitment to requirements from project participants;

managing changes to requirements as they evolve during the project;

maintaining bidirectional traceability among requirements and work products; and

ensuring that project plans and work products remain aligned with requirements.
An effective risk management process includes the following leading practices:

determining risk sources and categories;

defining parameters used to analyze and categorize risks and to control the risk management effort;

establishing and maintaining the strategy to be used for risk management;

identifying and documenting risks;

evaluating and categorizing each identified risk using defined risk categories and parameters, and determining its relative priority;

developing a risk mitigation plan in accordance with the risk management strategy; and

monitoring the status of each risk periodically and implementing the risk mitigation plan as appropriate.

The three selected MAIS programs that we evaluated had fully implemented most, but not all, of the five leading practices for managing requirements and the seven leading practices for managing risks. Specifically, two of the three programs implemented all of the requirements management practices, while one program implemented most, but not all, of the practices. Further, one of the three programs implemented all of the risk management practices, while two programs implemented most, but not all, of the practices. Table 3 shows the extent to which practices were implemented by the three selected programs.

Two Programs Fully and One Program Partially Implemented Leading Practices for Managing Requirements

Two of the three programs had fully implemented the requirements management practices. The other program had partially implemented two practices and fully implemented three practices.

Navy — Navy Consolidated Afloat Networks and Enterprise Services

The Navy had fully implemented the five requirements management practices for the Consolidated Afloat Networks and Enterprise Services program. For example, the program developed an understanding with requirements providers of the meaning of the requirements. Specifically, there was a plan for documenting, managing, and controlling changes to requirements throughout the system life cycle. This plan served as the primary guidance for integrating the management of all specified and derived requirements for the Consolidated Afloat Networks and Enterprise Services program. In addition, the program had established criteria for determining requirements providers. Specifically, roles and responsibilities for requirements management had been identified. Further, the program managed changes to requirements as they evolved during the project. For example, the program provided evidence that it maintains a requirements change history, including the rationale for changes.

Defense Logistics Agency — Defense Agencies Initiative, Increment 2

The Defense Logistics Agency had fully implemented the five requirements management practices for the Defense Agencies Initiative, Increment 2. For example, the program had established objective criteria for the evaluation and acceptance of requirements. Specifically, there was a process in place to develop and finalize deliverables in support of the business requirements identified by the stakeholders, ensure that requirements management activities were performed in a timely manner throughout the life of the project, and review and approve requirements deliverables. Further, throughout the process, the requirements manager tracked requirements changes and maintained traceability of end user needs to the system performance specification.
Defense Health Agency — Defense Healthcare Management System Modernization

The Defense Health Agency had fully implemented three and partially implemented two of the five requirements management practices for the Defense Healthcare Management System Modernization program. For example, the program had established objective criteria for the evaluation and acceptance of requirements. Specifically, any new or updated requirements were presented to a Configuration Steering Board for review and approval prior to any changes being made. Further, throughout the process, the requirements manager tracked requirements changes and maintained traceability to ensure they were documented.

However, the program had not developed an understanding with the requirements providers on the specific meaning of the requirements. For example, although the program had developed a requirements management plan that provided guidance in this area, according to program officials, the plan was not signed and approved because of the recent shift of the program from a non-business MAIS program to a MAIS business program operating under DOD Instruction 5000.75. Program officials stated that the requirements management plan is not expected to be complete until final guidance is provided by the Office of the Secretary. Regardless of this recent shift, the program should have had an approved requirements management plan in place since program initiation. In the absence of an approved plan, the program lacks assurance that it can effectively communicate and manage requirements practices.

Further, the program had not demonstrated that it identified any changes that should be made to plans and work products resulting from changes to the requirements baseline. Program officials stated that efforts to review modifications to the plan due to requirements changes had not been conducted, but they expected the review and approval to be done at some future date. However, they could not provide a specific time frame. According to CMMI®-ACQ, until project plans and work products are updated to coincide with changes in requirements, the program will not be able to effectively identify inconsistencies between requirement changes and project plans and work products, and initiate corrective actions to resolve them.

One Program Fully and Two Programs Partially Implemented Applicable Risk Management Leading Practices

One program had fully implemented the risk management practices, while two had fully implemented all but one practice.

Navy — Navy Consolidated Afloat Networks and Enterprise Services

The Navy had fully implemented six and partially implemented one risk management practice for the Consolidated Afloat Networks and Enterprise Services program. For example, the program defined consistent criteria for evaluating and quantifying risk likelihood and severity levels. Specifically, the program calculated risk exposure, the value given to a risk event, a product, or the overall program based on the analysis of the probability and consequences of the event; the program used this value to examine and oversee changes that impact the project. Further, the program's Risk Management Guide outlined risk performance, cost, and schedule criteria. In addition, the program demonstrated that it considered the costs and benefits of implementing risk mitigation plans. Specifically, a risk's description provided the cost impacts associated with the risk, which in turn provided evidence that costs and benefits were considered during risk evaluation.

However, the Navy partially implemented one practice.
Specifically, although the program provided its failover/recovery plan, which is intended to return the program to a state of readiness after a failure, the plan did not explicitly identify environmental elements. A program official stated that environmental factors, such as risks that could negatively affect the program's work, are understood, but these factors had not been documented in the plan. Further, the official stated that the program should update the plan accordingly, but did not provide a time frame to complete this effort. Until all potential issues, hazards, threats, and vulnerabilities that could negatively affect work efforts have been identified in the plan, successful risk management cannot be ensured.

Defense Logistics Agency — Defense Agencies Initiative, Increment 2

The Defense Logistics Agency had fully implemented all seven risk management practices for the Defense Agencies Initiative, Increment 2. For example, the program identified program risks, including risk sources, categories, and stakeholders. In addition, the program evaluated Defense Agencies Initiative, Increment 2 risks using consistent criteria for quantifying risk likelihood and severity levels. Specifically, risk level was based on a combination of factors, including both likelihood and consequence. In all instances, consensus on risk levels was required between the risk owner and the customer counterpart. Further, the program's contingency plan provided guidance for outages that fell into one of three disaster categories: natural disasters, man-made disasters, and technological disasters.

Defense Health Agency — Defense Healthcare Management System Modernization

The Defense Health Agency had fully implemented six and partially implemented one of the seven risk management practices for the Defense Healthcare Management System Modernization program. For example, the program evaluated risks using consistent criteria for quantifying risk likelihood and severity levels. Specifically, the program's Risk and Issue Management Plan described how to assess the impact level in each risk area (performance, project and program schedules, and cost). Further, the program prioritized risks for mitigation. For example, risks were categorized and charted as low, medium, or high, and grouped accordingly in the program's risk register. In addition, the program's Disaster Recovery Plan provided processes to allow rapid recovery of critical operations during a disaster, including environmental disasters such as tornadoes.

Regarding the partially implemented practice, the program provided an example of a risk mitigation plan. However, the program indicated that costs and benefits were not quantified within the program-level risk mitigation plans. According to CMMI®-ACQ, risk mitigation activities should be examined for the benefits they provide versus the resources they will expend. Just like any other design activity, alternative plans may need to be developed and the costs and benefits of each alternative assessed. However, the program does not require that costs and benefits be included as part of its risk mitigation planning efforts. As a result, the information for making an informed decision on the costs and benefits of risk mitigation solutions is limited. Program officials did not indicate whether they have plans to implement this practice, and they did not explain why they are unable to provide this information.
Until the program quantifies costs and benefits, it will not be able to effectively select the most appropriate risk mitigation plan to address each risk.

Conclusions

While DOD's policy for non-business MAIS programs adheres to all four leading IT management practices, the department's policy for MAIS business programs does not adhere to two of the leading practices: establishing initial and current baseline estimates on cost and schedule with predetermined threshold estimates, and periodically reporting performance information to stakeholders. Until DOD adheres to these practices in the policy that governs MAIS business programs, it cannot ensure that stakeholders will have the information they need to manage and oversee their investments.

Following leading IT acquisition practices on requirements and risk management is essential to help programs effectively plan and direct their development and acquisition efforts. All of the leading IT acquisition practices for requirements and risk management had been fully or partially implemented by the three programs that we reviewed. However, the Defense Health Agency's Defense Healthcare Management System Modernization program has not finalized its requirements management plan, nor has it identified changes that should be made to plans and work products resulting from changes to the requirements baseline. Until the program addresses these practices, it will lack a comprehensive plan for managing its requirements and it may not be able to effectively identify inconsistencies and initiate corrective actions. Further, the Navy Consolidated Afloat Networks and Enterprise Services program did not fully identify and document risks that could negatively affect work efforts. In addition, the Defense Health Agency's Defense Healthcare Management System Modernization program did not quantify costs and benefits of risk mitigation within its program-level risk mitigation plans. As a result, successful risk management for avoiding, reducing, and controlling the probability of risk occurrence cannot be ensured.

Recommendations for Executive Action

We are making the following three recommendations to the Secretary of Defense to direct:

The Under Secretary of Defense for Acquisition and Sustainment to update the policy or guidance for MAIS business programs. Specifically, the update should include the following elements: establishment of initial and current baseline cost and schedule estimates; predetermined threshold cost and schedule estimates to identify the point when programs may be at high risk; and quarterly and annual reports on the performance of programs to stakeholders. (Recommendation 1)

The Director of the Defense Health Agency to direct the program manager for the Defense Healthcare Management System Modernization program to: finalize and approve its requirements management plan; identify and document changes that should be made to plans and work products resulting from changes to the requirements baseline; and quantify costs and benefits of risk mitigation within its program-level risk mitigation plans. (Recommendation 2)

The Secretary of the Navy to direct the program manager for the Navy Consolidated Afloat Networks and Enterprise Services program to identify and document, in the failover/recovery plan, all potential external environmental issues, such as hazards, threats, and vulnerabilities, that could negatively affect work efforts.
(Recommendation 3)

Agency Comments and Our Evaluation

DOD provided written comments on a draft of this report, which are reproduced in appendix II. In its comments, the department partially concurred with our first recommendation and concurred with the other two recommendations.

DOD partially concurred with the first recommendation on updating the policy or guidance for MAIS business programs. Regarding establishing baselines, the Under Secretary of Defense for Acquisition and Sustainment stated that DOD Instruction 5000.75 requires establishment of cost, schedule, and performance parameters for each release before development or delivery. The instruction also requires consideration of program progress against baselined cost, schedule, and performance as a criterion at the limited deployment and full deployment decision points. A baseline requirement thus exists in DOD Instruction 5000.75, but it is not described as an acquisition program baseline, a term that may be familiar to readers of DOD Instruction 5000.02. The Under Secretary added that the Army's implementation guidance states that each increment must have an acquisition program baseline with its own set of threshold and objective values set by the user.

While we agree that the existing policy requires such parameters to be captured and included in the department's decision-making process, we found the policy to be vague in its discussion of these parameters and to not clearly define what a baseline is, or which baselines are to be used or reported for comparison purposes. For example, the policy does not make a distinction between the initial acquisition program baseline, the current baseline, and baseline deviations. Yet such information is important because it provides a basis for decision makers to identify the extent to which a program may have deviated from its initial cost, schedule, or technical performance baseline. By making these distinctions, the department's policy for MAIS business programs will be more consistent with its policy for non-business MAIS programs with regard to the way an acquisition program baseline is defined and the elements that should be captured and reported to decision makers. In turn, the program managers who prepare these reports and the decision makers who rely on them will have information that is consistently and succinctly prepared for making credible decisions.

Regarding adding provisions in its policy for the establishment of predetermined thresholds, the Under Secretary stated that DOD Instruction 5000.75 makes the milestone decision authority responsible for delivery within cost, schedule, and performance parameters, which the milestone decision authority is to do by establishing oversight controls for programs, including procedures to report and address variances. The Under Secretary added that the instruction does not suggest the practice of establishing a predetermined threshold for the variance, and that DOD will consider adding this feature in the update to the instruction.

Finally, regarding providing periodic annual and quarterly reports to the department's leadership, the Under Secretary stated that such a periodic report would add value only if there had been no recent communication of program status from the program office to the leadership or stakeholder communities. While such communication is expected to occur frequently, its regularity is not specified in current policy or guidance.
The Under Secretary stated that DOD will consider adding a provision for a report to leadership and functional stakeholders if such communication has not occurred within the past 3 or 4 months.

DOD concurred with the second and third recommendations related to the department's implementation of selected IT management practices. Regarding the second recommendation, the Under Secretary of Defense for Acquisition and Sustainment agreed to direct the Defense Healthcare Management System Modernization program manager to update and approve the requirements management plan, identify and document changes to the requirements baseline, and quantify the costs and benefits in the risk mitigation plans. Further, regarding the third recommendation, the Secretary of the Navy agreed to direct the program manager to identify and document all potential external environmental issues that could negatively affect work efforts for the Navy's Consolidated Afloat Networks and Enterprise Services program. By taking these steps, these programs should be better positioned to effectively identify inconsistencies in managing changes to their requirements, and to be more responsive to potential environmental issues.

We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Secretaries of the Army, Navy, and Air Force; the Under Secretary of Defense for Acquisition and Sustainment; the Director of the Defense Health Agency; and other interested parties. This report also is available at no charge on the GAO website at http://www.gao.gov. Should you or your staffs have any questions on information discussed in this report, please contact me at (202) 512-4456 or harriscc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.

Appendix I: Objectives, Scope, and Methodology

The National Defense Authorization Act for Fiscal Year 2012 mandated that we select, assess, and report on selected major automated information systems (MAIS) programs annually through March 2018. GAO satisfied the statutory mandate by submitting a draft of this report to the congressional committees on March 29, 2018. This final version of the report is the sixth and last report in the series of annual mandated assessments. Our objectives were to: (1) assess the Department of Defense's (DOD) policy for the management and oversight of MAIS programs; (2) describe the extent to which selected MAIS programs have changed their planned cost and schedule estimates and met performance targets; and (3) assess the extent to which selected MAIS programs have used leading information technology (IT) acquisition practices, including requirements and risk management.

To address the first objective, we identified four leading IT management practices in GAO's Information Technology Investment Management guide and compared DOD's policies against those practices.
These leading practices are:

instituting the investment board, which is the process for creating and defining the membership, guiding policies, operations, roles, responsibilities, and authorities within the organization;

identifying decision authorities for making important acquisition decisions;

providing oversight whereby the organization monitors each project on its performance progress (e.g., establishing and tracking baseline estimates on cost and schedule goals, and thresholds to identify high risk on cost and schedule); and

capturing and providing performance information about a particular investment (project) to decision makers at regular intervals (e.g., quarterly and annually).

We then compared DOD's policies used to manage and oversee the department's non-business MAIS programs and MAIS business programs against these leading IT management practices. The department's policy documents for managing and overseeing non-business MAIS programs and MAIS business programs include the following:

Memorandum by the Under Secretary of Defense for Acquisition, Technology, and Logistics, dated November 17, 2017, regarding the regulatory response to the repeal of title 10, United States Code, Chapter 144A, Major Automated Information System Programs.

Memorandum by the Under Secretary of Defense for Acquisition, Technology, and Logistics, dated April 24, 2017, regarding the transition of programs to business system categories.

DOD Instruction 5000.75, Business Systems Requirements and Acquisition, effective February 2, 2017.

DOD Instruction 5000.02, Operation of the Defense Acquisition System, effective February 2, 2017.

We also interviewed an official from the former Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics, who was responsible for the development of plans and policy regarding the management and monitoring of non-business MAIS programs and MAIS business programs.

To address the second objective, we used DOD's official list of 34 business and non-business MAIS programs, as of April 18, 2017, to establish a basis for selecting programs. Of the 34 programs, we selected the 15 business and non-business MAIS programs that met our criteria: programs must be unclassified and have an initial acquisition program baseline that could be used as a reference point for evaluating cost, schedule, and technical performance characteristics. We then collected and analyzed key documents, reports, and artifacts for each program and summarized the information on estimated cost, schedule, and technical performance goals, including each program's latest status in meeting those estimated goals.

Next, we analyzed and compared each selected program's first acquisition program baseline cost estimate to the latest estimate to determine the extent to which planned program costs had changed. Specifically, we compared the total life-cycle cost estimates from the first baseline to the latest estimates. Similarly, to determine the extent to which these programs changed their planned schedule estimates, we compared each program's first acquisition program baseline schedule to the latest schedule. To determine whether the selected programs met their performance targets, we analyzed each program's self-identified system performance targets and compared them against actual system performance metrics and the latest test reports.
We also reviewed additional information on each program's cost, schedule, and performance, including program documentation, such as DOD's MAIS annual and quarterly reports, acquisition program baselines, system test reports, and our prior reports. We then aggregated and summarized the results of these analyses across the programs.

To address the third objective, we started with the list of the 15 programs from the second objective as a basis for selecting three MAIS programs as case studies. We used a combination of the following criteria to select the MAIS programs to review: programs used in a most recent MAIS review were eliminated from consideration; the program was not designated as classified; and the program had a baseline. Based on these criteria, we chose the following systems: Navy Consolidated Afloat Networks and Enterprise Services; Defense Logistics Agency's Defense Agencies Initiative, Increment 2; and Defense Health Agency's Defense Healthcare Management System Modernization.

We then analyzed each selected program's IT acquisition documentation and compared it to key requirements management and risk management leading practices—including the Software Engineering Institute's Capability Maturity Model® Integration for Acquisition (CMMI-ACQ) practices—to determine the extent to which the programs were implementing these practices. In particular, the requirements management practices we reviewed were: develop an understanding with the requirements providers on the meaning of the requirements; obtain commitment to requirements from project participants; manage changes to requirements as they evolve during the project; maintain bidirectional traceability among requirements and work; and ensure that project plans and work products remain aligned with requirements. Specifically, we analyzed program requirements documentation, including requirements management plans, requirements traceability matrices, requirements change forms, technical performance assessments, and requirements board meeting minutes. Additionally, we interviewed program officials to obtain additional information about their requirements management practices. The conclusions reached for this objective are not generalizable to the larger population of 34 business and non-business MAIS programs.

We also reviewed the following risk management practices: determine risk sources and categories; define parameters used to analyze and categorize risks and to control the risk management effort; establish and maintain the strategy to be used for risk management; identify and document risks; evaluate and categorize each identified risk using defined risk categories and parameters, and determine its relative priority; develop a risk mitigation plan in accordance with the risk management strategy; and monitor the status of each risk periodically and implement the risk mitigation plan as appropriate. Specifically, we analyzed program risk documentation, including risk reports, risk-level assignments, risk management plans, risk mitigation plans, and risk board meeting minutes. Additionally, we interviewed program officials to obtain additional information about their risks and risk management practices.

To assess the reliability of the program data used to support the findings in this report, we corroborated program office responses with relevant program documentation and interviews with agency officials.
We found no data reliability issues and determined that the data used in this report were sufficiently reliable for our reporting purposes. We have also made appropriate attribution indicating the sources of the data.

We conducted this performance audit from April 2017 to May 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Comments from the Department of Defense

Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, the following staff also made key contributions to this report: Eric Winter (Assistant Director), John Ortiz (Analyst in Charge), Alex Bennett, Neha Bhatt, Chris Businsky, and Rebecca Eyler.
Why GAO Did This Study

DOD's MAIS programs are intended to help the agency sustain its key operations. In April 2017, recognizing that MAIS programs met different mission needs, DOD categorized its MAIS programs into business and non-business systems. The National Defense Authorization Act for Fiscal Year 2012 includes a provision for GAO to select, assess, and report on DOD's MAIS programs annually through March 2018. GAO's objectives, among others, were to (1) assess DOD's policies for managing and overseeing MAIS programs and (2) describe the extent to which selected MAIS programs have changed their planned cost and schedule estimates and met technical performance goals. To address these objectives, GAO compared DOD's policies for managing and overseeing all 34 MAIS programs (24 non-business programs and 10 business programs) to leading IT management practices. GAO also compared 15 selected programs' initial cost, schedule, and performance baselines to their current acquisition program estimates.

What GAO Found

The strength of the Department of Defense's (DOD) policies for managing and overseeing major automated information system (MAIS) programs varies. Specifically, the policy for managing 24 non-business MAIS programs adheres to leading information technology (IT) management practices, but the policy for managing 10 MAIS business programs does not always do so (see table). When DOD categorized 10 of the 34 MAIS programs as MAIS business programs, it directed these programs to adhere to DOD's business systems policy (DOD Instruction 5000.75). However, that policy is not fully comprehensive for the management and oversight of MAIS business programs. Until DOD updates its business systems policy to address gaps in establishing performance information such as baseline estimates on program cost and schedule goals, identifying thresholds to flag high risk, and requiring periodic reports to stakeholders at regular intervals, stakeholders will likely not have all the information they need to manage and oversee MAIS business programs.

While all 15 selected business and non-business MAIS programs had either increased or decreased their planned cost estimates and the majority had delays in their planned schedule estimates, the majority of the 9 programs that had performance targets met those goals. Specifically, the changes in cost estimates ranged from a decrease of $1.6 billion (-41 percent) to an increase of $1.5 billion (163 percent). The decreases in planned cost were largely due to scope reduction, while the increases were due to underestimating levels of effort and contracting issues. The slippages in schedule estimates ranged from 5 months to 5 years; these delays were caused by unrealistic expectations or unplanned changes. Six of the 9 programs that had performance targets met all of them, while the other 3 met several but not all of their targets. The remaining 6 programs were in the early stages of system development and had not begun performance testing.

What GAO Recommends

GAO is making three recommendations, including that DOD update its policy for managing MAIS business programs to include baseline estimates. DOD partially concurred with this recommendation and fully concurred with the other two recommendations. GAO continues to believe that all the recommendations are warranted.
Background

The Foreign Military Sales (FMS) program, which transfers defense articles and services to international partners and organizations, is essentially an acquisition process through which the U.S. government procures military equipment, training, and other services on behalf of foreign customers. Multiple organizations have a role in the FMS program. The Department of State has overall responsibility for the program, including approving what defense items and services can be sold to specific countries. DOD administers the FMS program and manages the procurements executed within the military departments on behalf of foreign governments. Within DOD, the Defense Security Cooperation Agency (DSCA) carries out key functions such as supporting development of policy for FMS. The military departments carry out the day-to-day implementation of FMS procurements, which can include providing price and availability data at the customer's request.

Typically, defense items—such as weapon systems—made available for transfer or sale to foreign customers are systems that have completed operational testing and are entering or have entered full rate production. In addition, DOD also sells non-standard items, which are defined as items that DOD does not currently manage and may include items that (1) are commercially available, (2) DOD previously purchased and has since retired, or (3) were purchased in a different configuration for DOD components. For example, a customer may express interest in buying tanks that DOD no longer buys for its own needs. A customer may also express interest in buying a tank that DOD currently procures but with a radio communications configuration that is different from what DOD uses.

FMS Price and Availability Process

A single DOD entity may not have full responsibility for all aspects of responding to a foreign customer's request to purchase U.S. defense items and services. Under DSCA policy, FMS procurements must generally be managed at "no cost" or "no profit" to the U.S. government. DOD's work related to developing price and availability data and other FMS operations is generally paid for through the administrative charges collected from foreign customers. Depending on the complexity of the customer's request, coordination within and across DOD components may be necessary to obtain complete information on pricing and availability. DOD may also need to coordinate with the defense contractors who ultimately develop and provide the equipment or services.

The FMS process generally begins when a foreign government submits a letter of request to the Department of State or DOD to purchase defense articles or services. In the letter of request, the foreign customer may express interest in obtaining preliminary price and availability data for the capabilities it seeks. While DOD describes price and availability data as rough order of magnitude estimates, DSCA's guidance does not define the precision of these estimates. According to DOD, FMS price and availability data are non-binding estimates for the defense items and services and are not intended to be budget-quality estimates. Requests for price and availability data can signal to DOD and defense contractors the potential for future sales. DOD and contractors may also draw upon these requests to forecast staffing needs and production line availability. DOD security cooperation organizations working in U.S. embassies around the world can assist potential customers with defining and refining their requirements prior to submitting a request for price and availability data.
The security cooperation organizations engage in this early coordination to help customers articulate their capability needs. This early coordination also gives DOD components advance notice of upcoming requests so they can initiate technology security and foreign disclosure processes for the timely release of information. Requests for price and availability data represent an optional step in the process. Customers may forgo the price and availability process and instead submit a formal assistance request for a letter of offer and acceptance, which when signed by the customer and U.S. government becomes an executable FMS case. Figure 1 illustrates where the option to request price and availability occurs in the overall FMS process.

DOD Is Reconsidering Options to Implement Recent Legislative Change for FMS Price and Availability Process

The National Defense Authorization Act for Fiscal Year 2017 required DOD to establish a process for defense contractors to provide input on any differences regarding the appropriateness of government price and availability data prior to delivery of formal responses to customers. In response, DSCA issued a policy memorandum in October 2018 that was rescinded 2 months later due to concerns about the sensitivity of information to be shared with contractors. The policy memorandum had instructed DOD components to formally request rough order of magnitude estimates from the prime defense contractor if (1) the total value of the primary article or service requested exceeds $50 million, and (2) the customer has a preference for a non-competitive sole source acquisition or only a single source exists for the primary defense item. Additionally, the memorandum stated that DOD components would allow the prime contractor 5 business days to provide feedback on the appropriateness of the estimate for its items included in the price and availability response before the customer receives that response. The memorandum had established a formal process to obtain contractor feedback and resolve issues that may occur, such as differences between the program office's and prime contractor's estimates, and emphasized the importance of being aware of program deadlines when following the process to coordinate with contractors.

According to a DSCA official, this new policy would have helped alleviate industry concerns about how DOD incorporates estimates provided by industry into the price and availability responses provided to foreign customers. However, according to DSCA officials, when implementing the process, DOD found that the potential level of detail and precision in price and availability estimates could provide an unfair competitive advantage to contractors coordinating with DOD on price and availability responses to foreign customers. As discussed in further detail later in the report, in some instances we found that price and availability estimates DOD offered included more precise information than rough order of magnitude estimates. According to DSCA officials, such information could offer the contractor insight into the government's pricing methodologies. DSCA subsequently rescinded the October 2018 policy memorandum. DSCA plans to conduct a 120-day review to reassess what information, if any, can be shared with contractors to satisfy the legal requirement to obtain contractor input and feedback on price and availability estimates before DOD responds to customers.
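To make the rescinded memorandum's two-part test concrete, the decision of when a DOD component had to request a contractor estimate can be sketched as a simple predicate. This is our illustrative reading of the criteria described above; the function and parameter names are ours, not DSCA's.

```python
def must_request_contractor_estimate(total_value_usd: float,
                                     sole_source_preferred: bool,
                                     single_source_exists: bool) -> bool:
    """Two-part test from the rescinded October 2018 memorandum: the primary
    article or service exceeds $50 million, and the acquisition is effectively
    sole source (by customer preference or because only one source exists)."""
    over_threshold = total_value_usd > 50_000_000
    effectively_sole_source = sole_source_preferred or single_source_exists
    return over_threshold and effectively_sole_source

# Example: a $60 million request for an item with only a single source.
print(must_request_contractor_estimate(60_000_000, False, True))  # True
```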
DOD Received about 3,000 Requests for Price and Availability Data over the Past 5 Years

From fiscal years 2014 through 2018, DOD reported receiving 3,038 requests for price and availability data from foreign customers in 93 countries and the North Atlantic Treaty Organization. Foreign customer requests included services and items such as training and support services for weapon systems, missiles and ammunition, aircraft, and communication equipment. We found that most requests came from the same foreign customers. Specifically, 10 customers accounted for 56 percent of requests, with one customer accounting for 28 percent of all requests during the 5-year period covered by our review. Customers in the Indo-Pacific region accounted for the largest share of requests, as shown in figure 2. Among DOD components, the military departments—Army, Navy, and Air Force—received almost all price and availability requests, as shown in figure 3. The Army received slightly more requests over the 5-year period, closely followed by the Navy.

Foreign customers we obtained information from noted that they request price and availability data to inform their acquisition strategies, obtain a sense of affordability, and support budget planning. For example, when considering potential acquisition strategies, some customers may request data for different options, variants, or quantities of similar items or services, resulting in multiple requests for price and availability data to inform a potential purchase. In cases when a customer is interested in procuring a specific item, the customer may request data on prices and lead times to determine affordability. The customer may also request the data when considering whether to purchase from the United States or from foreign countries.

Requesting price and availability data can also provide foreign customers with information on whether the U.S. government will make the requested defense item or service available for sale. While preliminary estimates are not an official acknowledgement that the item or service will be made available to the customer, the request can trigger a U.S. government review that includes application of policies that govern the release of certain technologies or systems and a discussion with the customer about the item or service. In some cases, customers can receive responses with partial information if some requested items are not available for release.

DOD does not collect data on which customers' requests for price and availability data resulted in a formal request to purchase defense items or services under FMS. Army security assistance officials told us it can take years between when price and availability data are provided and when a customer submits a request for a letter of offer and acceptance, if at all. For their part, customers we obtained information from noted numerous reasons why they might choose not to pursue a potential sale: the item or service could not be made available within a time frame that meets their needs; the overall capability was not affordable; or the price and availability estimates were higher than estimates from other foreign sources.

The military departments do not consistently track information on the status of responses sent to foreign customers.
We found that the Navy and Army generally captured the status of a response in the system, identifying when a response is in development, has been sent to the customer, or has been canceled; according to security assistance officials, however, this information may not be entered consistently. In addition, the Air Force does not generally update the status of a response in the system. Further, Air Force security assistance officials told us the department does not update data in the system to reflect that the Air Force provided price and availability data to the customer. According to DSCA and military department officials, there is no requirement that DOD components record when a response is sent to a customer. A DSCA official told us that DSCA does not have a specific need to monitor the status of price and availability responses, in part because these are not formal offers, and DOD prioritizes data collection for formal FMS cases—cases for which a signed agreement between the U.S. government and foreign customer is in place.

DOD's Guidance Allows for Flexibility in Developing Price and Availability Data and Reflects Leading Practices for Using Quality Information

DSCA has established DOD-wide guidance—the Security Assistance Management Manual—for responding to foreign customers' requests for information on defense items and services available for purchase through the FMS program. The manual includes some guidance on developing, documenting, and communicating price and availability data to foreign customers, but it largely pertains to a customer's request for a letter of offer and acceptance with the intent to buy. Security assistance officials from across the military departments told us they rely on the manual to guide their efforts throughout the price and availability process, and that DSCA's guidance provides a framework for the process without always being prescriptive, allowing the military departments latitude in how they implement it. DSCA and military department officials we spoke with said that a flexible process is needed to account for the circumstances specific to each request. The price and availability process outlined in the guidance and described by DSCA and military department officials involves input from numerous organizations within and external to DOD, as shown in figure 4. The guidance states the process should be completed within 45 days.

Generally, we found that DSCA's guidance reflected attributes conducive to using quality information, as called for by federal internal control standards. For example, the standards call for agencies to define information requirements and obtain relevant data from reliable sources. DOD's guidance reflects this, stating that price and availability data should serve as rough order of magnitude estimates of the cost and availability of defense items or services and are for rough-order planning purposes.
The guidance also:

- instructs officials to assess whether a foreign customer's request contains the necessary information to develop price and availability data, such as the major item or service, quantity, anticipated delivery schedule, and other specifications;
- suggests that price and availability data also provide customers with information about costs not only for buying equipment but also for related operation and sustainment;
- assumes responses will include standard items rather than nonstandard items;
- identifies relevant data sources that the military departments can consult to develop price and availability data, such as the last contract award, stock price, or information from defense contractors;
- states that military departments and DSCA should use the Defense Security Assistance Management System to prepare responses to price and availability requests;
- suggests that data be itemized by separating main equipment from training, technical publications, transportation costs, and other elements, as applicable; and
- states that responses should be developed and communicated to customers within 45 days from when DOD receives the request.

In Selected Examples, DOD Included Comprehensive Data on Ownership Costs When Developing Price and Availability Responses

When selling defense items and services to foreign customers, military department officials indicated that they strive to offer a complete and sustainable capability, referred to as the total package approach. Using this approach, DOD takes into account the related support, such as training, logistics, spare parts, warranties, contractor support, and other considerations necessary for operating and sustaining the defense items or services being purchased. The total package approach represents the initial and follow-on cost of owning and supporting the capability. For example, a DOD program official may develop a cost estimate for the capability that includes several years of technical support for maintaining it. DOD may also provide a customer with cost estimates for maintaining the capability over the course of its expected lifetime.

Specifically, in the five examples we reviewed, we found that DOD officials generally used a total package approach when developing price and availability data. For example, military department officials developed price and availability data that not only included the items and services requested by the customer, but also included rough order of magnitude estimates for additional costs to reflect the expected ownership costs. Ownership costs may include development, procurement, operation, and sustainment costs for the defense item, as part of a total package approach. The timeframe of ownership costs provided may vary; according to a DSCA official, ownership costs generally cover the first 2 years.

In four of the five cases we reviewed, the customer requested a capability and, in response, the program office provided estimates for not only the equipment but also the support needed to achieve the desired capability, ranging from one week of training to five years of technical support. For example, in one case, a customer requested data for a complex naval weapon system that they had not previously used. Navy program officials provided estimates for the system, spare parts, training, and other items as requested by the customer.
Program officials also included estimates on radio navigation equipment and software that are essential for the system to function as intended, but were not part of the customer's initial request. Officials stated that they included these additional costs to give the customer a comprehensive view of the costs to acquire, operate, and maintain the weapon system. In the fifth case, program officials told us they did not have to include training or support because this customer was replacing missiles in their inventory, previously purchased through FMS. However, in considering the foreign customer's ownership costs, the officials said they included costs for containers for storing the missiles.

For the selected examples, program officials obtained data from defense contractors and previous sales, adjusting estimates from data sources to ensure the price and availability estimate reflected what the customer could expect to pay for the item or service—the initial and follow-on cost of owning and supporting the capability—if the customer decided to proceed with the purchase. Defense contractors responsible for providing data for four of the five examples told us they consider the quantity and specific requirements of the request, such as training, spares, and support, as well as inflation and anticipated production and delivery schedules in some cases. We found that for the selected examples program officials adjusted estimates from contractors and other data sources for a number of reasons, such as to account for potential changes in production schedules and to add program management support provided by the U.S. government to administer system upgrades. By accounting for these likely costs, program officials stated that they were providing the customer with estimates that would more closely reflect expected costs if the customer proceeded with the sale. For example:

- In two of the responses we reviewed, for missiles and communication systems, Navy and Air Force officials increased contractors' estimates, in part, to account for possible changes to production plans. In the Navy response, for example, program officials increased the contractor's estimate for the missiles by approximately 14 percent. Officials told us this was to account for possible changes in the production schedule and quantity. Contractor representatives told us that their estimate was based on a specific number of missiles being produced in a certain production lot. Program officials told us that the customer would not likely have a signed agreement in place to receive missiles from that specific production lot. According to program officials, this means the price per missile could be higher than forecasted in the contractor's initial estimate because there may be fewer quantities in production, resulting in fixed production costs being spread among fewer missiles. (A simplified illustration of this fixed-cost effect follows these examples.)

- In an Army response we reviewed for non-standard upgrades to several hundred tanks, the program official used estimates provided by the contractor to develop the price and availability data. These tank upgrades are considered non-standard because the U.S. government no longer uses these tanks. In light of this, the program official included costs for program management support provided by the U.S. government because he said the magnitude of the program would likely require an Army office to execute and manage the upgrades, an effort projected to last up to 10 years.

- In an Air Force response we reviewed for a warning system, program officials considered historical data from similar DOD contracts. The program officials increased the price by $2.4 million over past procurements based on the customer's request to add a new full-time onsite engineer to support the warning system. This price also included costs for housing, living allowance, and travel expenses.
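The per-unit effect described in the missile example can be shown with simple arithmetic. The sketch below uses hypothetical figures of our own devising, not numbers from the Navy response:

    # Hypothetical illustration: fixed production costs spread over fewer
    # missiles raise the per-unit price. All figures are invented for this
    # example and are not drawn from the Navy response discussed above.
    fixed_lot_costs = 60_000_000  # production costs incurred regardless of quantity
    variable_cost = 1_000_000     # additional cost to build each missile

    for quantity in (100, 50):
        unit_price = variable_cost + fixed_lot_costs / quantity
        print(f"{quantity} missiles: ${unit_price:,.0f} per missile")
    # 100 missiles: $1,600,000 per missile
    # 50 missiles: $2,200,000 per missile

Halving the lot does not double the unit price, but the fixed-cost share per missile doubles, which is consistent with the kind of upward adjustment program officials described.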
Further, in our review of selected cases, we found that program officials may include other charges in price and availability data, such as:

- nonrecurring costs, which are unique, one-time, program-wide expenditures for certain major defense equipment sold under the FMS program;
- a contract administration charge—generally, 1.2 percent of the value of procured items—for services such as quality assurance and inspection;
- transportation costs for delivery of the item, which are generally calculated based on rates established by DSCA; and
- an administrative charge—currently set at 3.2 percent of the total value of the sale—to recover civilian employee salaries and operational costs for administering the FMS acquisition.
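To show how these standard charges can accumulate on top of a rough order of magnitude estimate, the following sketch applies the percentages cited above to a hypothetical $10 million item. It simplifies the charge bases, which DSCA policy defines in detail:

    # Simplified, hypothetical illustration of standard FMS charges.
    # The charge bases are approximations; DSCA policy governs the actual rules.
    items_cost = 10_000_000              # hypothetical procured items
    transportation = 250_000             # hypothetical delivery costs
    contract_admin = 0.012 * items_cost  # ~1.2 percent of procured items
    subtotal = items_cost + contract_admin + transportation
    fms_admin = 0.032 * subtotal         # ~3.2 percent of the total sale value
    print(f"Estimated total: ${subtotal + fms_admin:,.0f}")
    # Estimated total: $10,701,840

In this simplified case, the standard charges add roughly 7 percent to the customer's estimate before any nonrecurring costs are applied.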
Various Factors Can Influence DOD's Approach and the Timeliness of DOD's Responses

Military department officials told us that various factors influence the level of effort and information involved in developing price and availability data, some of which may also affect how long a response takes and whether the 45-day timeframe suggested by DSCA's guidance is achieved. When a foreign customer requests price and availability data, DOD and defense contractors, if involved, expend time and resources to provide a response, all without any certainty that a sale will materialize. As such, DOD officials and defense contractors determine what level of response is appropriate, given the nature of the customer's request and whether it includes non-standard items or items that require customization, among other things.

DSCA and Navy program officials said customers are interested in receiving price and availability responses quickly and recognize that timeliness is an area of concern with the FMS process in general. Over half of the 12 foreign customers we obtained information from noted that they are concerned with the length of time DOD's responses can take. Lengthy response times could result in customers missing opportunities to consider potential requests in upcoming budget cycles. Several customers communicated that some responses took considerably longer than 45 days, with some taking anywhere from 6 to 12 months. Among the five examples we reviewed, responses took from 45 to 320 days, as shown in table 1. Program and security assistance officials we interviewed told us they consider the following factors:

Customer interest and commitment. Insight into the degree of customer commitment to purchase through FMS may influence the time and resources military departments expend on developing a price and availability response. For example, Air Force security assistance officials told us that they may develop a more detailed response if advised by in-country personnel that a request for price and availability data will likely become a request for an actual purchase.

Clarity and completeness of customer's request. Customers may submit requests that lack the clarity and details needed to develop accurate data and estimate delivery timeframes. Several military department officials told us that when reviewing customers' requests for price and availability data, they often have discussions with customers to clarify requirements and, in some cases, estimated delivery schedules before developing a response. Defining the customer's requirement—even at this early stage—can be an iterative process that requires multiple interactions between the foreign customer and DOD officials. In one of the examples we reviewed, the defense contractor was also involved. These discussions to clarify the customer's requirements can prolong the process, according to several program officials.

Existing policy to release price and availability data. The U.S. government's relationship with the foreign customer and the type of defense item or service being requested—such as a weapon system with protected critical technologies versus medical evacuation equipment—can influence the length of time to obtain necessary approvals for the release of price and availability data, according to Navy program and Air Force security assistance officials. Requests for price and availability data may spur the U.S. government to review the current list of countries that have access to particular critical technologies, as shown in one Navy response to a request for a ballistic missile defense system. Initially, the Navy's Foreign Disclosure Office determined the system would not be available for potential release, and the Navy program office excluded it from the price and availability data. About a year later, according to Navy officials, following a change in U.S. policy, the Foreign Disclosure Office approved the release of price and availability data for the system and the Navy included it in a subsequent price and availability response.

Complexity of the request. Requests for a non-standard system, integration with foreign components, or a complex system may cause program offices to spend additional resources and time to develop price and availability data. For example, in response to a request for a complex weapon system to be integrated into a foreign customer's ship, Navy program officials said that they needed several months to develop price and availability data due to the complexity of the request, which required program officials to work with multiple contractors and DOD entities. In contrast, Army security assistance officials said that they generally aim to conserve resources and time by developing price and availability data based on standard items, even in instances when customers may request non-standard or complex systems.

Existing workload. The volume of requests and competing priorities can also affect the timeliness and the level of effort applied to the response. For example, Army security assistance officials stated that they may prioritize a customer's request for a letter of offer and acceptance, which initiates an executable FMS case, over a request for price and availability data because there are not resources available to do both at the same time.

Availability of requested item or service. When obtaining the items from defense contractors, for example, military department officials consider production schedule and quantity—both of which require additional assumptions to estimate unknown costs. For items that are in DOD's inventory and will not be replaced, officials are to take into account the item's actual value when developing price and availability data, according to a DSCA publication.
External factors. In cases where a customer is requesting price and availability data to decide whether to purchase defense items or services from the United States or another foreign government, military departments may expend additional resources to develop detailed price and availability data. For example, a Navy security assistance official stated that when officials are aware the customer plans to hold competitions between U.S. and foreign defense contractors, they solicit more detailed technical and cost information from defense contractors to present a competitive estimate.

Individually and combined, these factors, as well as the overall process, can influence response times.

Agency Comments

We provided a draft of this report to the Department of Defense (DOD) for comment. DOD's response letter is reproduced in appendix II. DOD separately provided technical comments, which we incorporated as appropriate.

We are sending copies of this report to the appropriate congressional committees and the Acting Secretary of Defense. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or makm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

Appendix I: Objectives, Scope, and Methodology

In this report, we (1) described foreign military sales (FMS) price and availability requests the Department of Defense (DOD) received from fiscal years 2014 through 2018, (2) assessed DOD's guidance on developing price and availability data, (3) described how DOD develops price and availability data for the requested capability, and (4) identified factors that can influence the time DOD takes to provide price and availability data to the customer.

To describe requests for price and availability data DOD received from foreign customers, we analyzed data from the Defense Security Cooperation Agency (DSCA). We reviewed data for fiscal years 2014 through 2018, the most recent 5-year period available. DSCA and other DOD components, including the military departments, use the Defense Security Assistance Management System as a workflow resource to process price and availability data requests, among other things. The system does not track which of the estimates result in a letter of offer and acceptance. To assess the reliability of Defense Security Assistance Management System data, we tested for missing data, duplicates, and inconsistent coding, and compared data for five examples to price and availability documentation we received from the Army, Navy, and Air Force. We interviewed DSCA officials responsible for the data system to identify the quality controls in place to help ensure the data are accurate and reliable, and discussed military department practices for using the system with security assistance officials. We found that the documentation for the five selected preliminary estimates generally matched the data DSCA provided and that requests matched across the multiple datasets we received from DSCA. Based on these steps, we determined the data were sufficiently reliable to report examples of the types of items and services requested and the number of requests DOD received by region, DOD component, and foreign customer.
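The reliability testing described above can be expressed as a handful of simple data checks. The sketch below is our illustration only; the field names are invented and do not reflect the Defense Security Assistance Management System schema:

    # Hypothetical illustration of basic data reliability checks; field
    # names are invented and do not reflect DSCA's actual system schema.
    import csv
    from collections import Counter

    with open("pa_requests.csv", newline="") as f:  # hypothetical data extract
        rows = list(csv.DictReader(f))

    missing = [r for r in rows if not r["country"] or not r["request_date"]]
    duplicates = [i for i, n in Counter(r["request_id"] for r in rows).items() if n > 1]
    valid_components = {"ARMY", "NAVY", "AIR FORCE", "OTHER"}
    bad_codes = [r for r in rows if r["component"] not in valid_components]

    print(len(missing), "rows with missing values")
    print(len(duplicates), "duplicate request identifiers")
    print(len(bad_codes), "rows with inconsistent component codes")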
We did not report the number of responses DOD provided for these requests or how long it took DOD to respond to foreign customers using these data because the military departments do not consistently update information in the Defense Security Assistance Management System to track the status of responses or the dates when a response is provided to the customer.

To assess available guidance, we reviewed DSCA and Army, Navy, and Air Force guidance for developing preliminary estimates in response to requests for price and availability data. We compared the DOD-wide guidance—the Security Assistance Management Manual—to the Standards for Internal Control in the Federal Government, which call for agencies to use quality information collected from relevant and reliable sources. Specifically, we reviewed the guidance to determine if it contained attributes that contribute to quality information, such as identifying the information requirements and relevant data sources needed to develop price and availability data.

To describe factors that DOD considers when developing price and availability data and illustrate how these factors influence the process, we selected a non-generalizable sample of five responses from fiscal year 2017 data provided by the military departments. Fiscal year 2017 represented the last complete year of data available when we selected this sample. Because the sample is not generalizable, we cannot report whether practices used among the responses are used across DOD for all price and availability responses. However, these examples provide useful insight into the process and the assumptions used when developing price and availability data. We selected the five examples—one from the Army, two from the Navy, and two from the Air Force—to obtain a variety of responses, including median and large case values and a median response time. We determined there were inconsistencies in the data provided, but that the data were sufficient for our purposes of selecting a non-generalizable sample from across the military departments. For each selected example, we collected and analyzed the letter of request, the price and availability data, DOD's response to the customer, supporting documentation if provided, such as clarification of the customer's request, and data collected from defense contractors or program offices. We reviewed the assumptions and factors used in developing the data and the various elements that make up the data, such as administrative charges and costs for training and spares. We interviewed relevant DOD security assistance and program officials, and defense contractor representatives, to understand the context and decisions made in developing, documenting, and communicating the price and availability data.

To identify the factors that can influence the timeliness of responses, we interviewed officials from DSCA and the Army, Navy, and Air Force. We also obtained information from defense contractors and foreign customers who, as stakeholders in the FMS price and availability process, have broad insights and perspectives on the process. To gather input from foreign customers, we interviewed representatives from the Foreign Procurement Group, which also solicited information from its consortium of 46 member countries on our behalf. We received responses from 12 countries—one of which was also a customer for one of the examples included in our review.
To obtain contractor’s perspectives, we gathered information from five companies through interviews and attended a meeting hosted by the National Defense Industrial Association. Three of the companies we obtained information from were involved in providing cost and schedule data for four of the examples in our sample. The information we obtained from these foreign customers and defense contractors is not generalizable to all foreign customers and defense contractors. As mentioned previously, we did not assess the timeliness of DOD’s responses because DOD does not consistently track when price and availability data responses are provided to customers in the Defense Security Assistance Management System. However, the information we gathered for the five examples in our sample provided some insight about how long it took DOD to provide a response to the customer. We conducted this performance audit from June 2018 to February 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Comments from the Department of Defense Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Candice Wright (Assistant Director) and Leslie Ashton (Analyst-in-Charge) managed this review. Bruna Oliveira, Carmen Yeung, Kurt Gurka, Robin Wilson, and Emily Bond made significant contributions to the work.
Why GAO Did This Study

DOD manages the procurement of billions of dollars in defense items and services on behalf of foreign customers through the FMS program. These sales help support the defense industrial base and are vital to U.S. foreign policy and national security interests. The FMS process generally begins with a request by a foreign government for information about a U.S. defense item or service. Requests for price and availability data are an optional step in the process. DOD guidance is to generally respond to such requests within 45 days.

The fiscal year 2018 National Defense Authorization Act included a provision for GAO to review DOD's process for developing price and availability data for foreign customers. This report addresses, among other objectives, (1) price and availability requests DOD received from fiscal years 2014 through 2018, (2) how DOD develops price and availability data, and (3) the factors that can influence the timeliness of DOD's responses to foreign customers with price and availability data.

GAO analyzed DOD price and availability data for fiscal years 2014 through 2018, the latest data available, and reviewed documents for a non-generalizable sample of five price and availability responses—varying by estimate value—provided to foreign customers by the Army, Navy, and Air Force. GAO also interviewed defense contractors and DOD officials. GAO is not making any recommendations at this time.

What GAO Found

The Department of Defense (DOD) reported receiving 3,038 requests for Foreign Military Sales (FMS) price and availability data in fiscal years 2014 through 2018 from 93 countries across six geographic regions, as shown in the figure. Foreign customer requests included services and items such as training and support services for weapon systems, missiles, aircraft, and communication equipment. Not all countries in each region submitted a price and availability request.

DOD officials indicated they generally strove to offer price and availability data that reflected rough order of magnitude estimates of total anticipated costs for a complete and sustainable capability. Contractors often provide input to DOD for these cost and schedule estimates. In the five examples GAO reviewed, DOD officials considered factors such as possible production delays and included anticipated costs for support services, operations, and sustainment, when needed. DOD officials also included FMS administrative charges and, as applicable, nonrecurring and transportation costs. GAO found that when DOD considered these factors in developing the response to the customer, it at times made adjustments to the estimates provided by contractors to more fully reflect expected costs if the items are purchased.

Among the five examples, GAO found that response times ranged from 45 to 320 days and that a number of factors can affect timeliness. For example, the complexity of the system or capability the customer is interested in acquiring may require involvement from multiple program offices and defense contractors, requiring more time than the 45 days suggested by DOD's guidance.
Background

To help manage its multi-billion dollar acquisition investments across its components, DHS has established policies and organizations for requirements validation, acquisition management, and budgeting. The department uses these to monitor and guide delivery of the acquisition programs the components require to close critical capability needs, enabling DHS to execute its missions and achieve its goals.

DHS and Its Components

DHS has 14 components, which, as a part of their operational missions, are responsible for assessing capability needs, developing the requirements to fill these needs, and creating acquisition programs to meet these requirements. The number and cost of acquisition programs vary by component. DHS generally defines a capability as the means to accomplish a mission or objective that may be achieved through materiel and non-materiel solutions. Once the component has a JRC-validated capability gap and identifies and documents the need for a materiel solution, it develops the operational requirements. Requirements can be unique to an individual component, or they can be joint requirements that apply to more than one component. Within the components, program management offices are responsible for planning and executing individual programs within cost, schedule, and performance parameters, and for preparing required acquisition documents.

Tracing Mission Needs to Program Requirements

The DHS requirements process generally starts with the identification of mission needs and broad capability gaps, from which components develop a program's operational requirements, key performance parameters, and more definitive technical requirements. Figure 1 depicts this traceability from mission needs to technical requirements. Operational requirements are what the end users need to fill capability gaps and conduct the mission. Operational requirements, in part, define the purpose of the acquisition program and set boundaries for user needs. Subject matter experts, such as systems engineers, support development of operational requirements to ensure that they are clearly developed. Well-defined operational requirements trace to one or more of the identified capability gaps.

After components define operational requirements, they identify some as key performance parameters, which denote the most important and non-negotiable requirements that the program has to meet to fulfill its fundamental purpose. According to DHS policy, failure to meet any key performance parameter results in a re-evaluation of the program that may lead to requirements changes or program cancellation. See figure 2 for an overview of the requirements process.

According to DHS policy on managing acquisition programs, components further decompose operational requirements into technical requirements, such as design or material specifications. For example, an operational requirement may be the ability to detect explosives at the airport. The technical requirement may then be the ability to detect metal or explosive material within certain parameters.
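This traceability, from capability gap down to technical requirement, can be pictured as linked records. The sketch below is our own simplified illustration built on the explosives-detection example; it is not a DHS system or schema:

    # Simplified illustration of requirements traceability (not a DHS system).
    gaps = {"G1": "Cannot reliably detect explosives at the airport"}

    operational_reqs = {
        "OR1": {"gap": "G1", "kpp": True,  # flagged as a key performance parameter
                "text": "Ability to detect explosives at the airport"},
    }

    technical_reqs = {
        "TR1": {"parent": "OR1",
                "text": "Detect metal or explosive material within set parameters"},
    }

    # A well-defined technical requirement traces, through an operational
    # requirement, back to a validated capability gap.
    for tid, tr in technical_reqs.items():
        parent = operational_reqs.get(tr["parent"])
        traced = parent is not None and parent["gap"] in gaps
        print(tid, "is traceable" if traced else "is UNTRACEABLE")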
DHS's Joint Requirements Council and Other Offices

Through the JRC, DHS provides oversight of operational requirements for the acquisition programs developed by its components. The JRC consists of a chair and 14 members, called principals, who are senior executives or officers representing key DHS headquarters offices and seven of the department's operational components. JRC principals represent the views of both their components and DHS, and they validate and prioritize capability needs and operational requirements. Among other responsibilities, the JRC is to provide requirements-related advice and validate key acquisition documentation to prioritize requirements and inform DHS investment decisions for all Level 1 and Level 2 major acquisitions, as well as for programs that are of joint interest, regardless of level.

Separate from the JRC, DHS's Office of Program Accountability and Risk Management, which reports directly to the Under Secretary for Management, oversees major acquisitions and guides acquisition policy. DHS also has a separate office for budget management and a planning, programming, budgeting, and execution process to allocate resources, such as funding, to acquisition programs. In addition, the Science and Technology Directorate conducts systems engineering reviews and technology assessments of the technical solutions for major acquisition programs. The Directorate also provides department-level guidance on requirements development in its Systems Engineering Life Cycle Guidebook.

DHS's Joint Requirements Process

Multiple DHS directives and manuals establish the framework for the department's Joint Requirements Integration and Management System (JRIMS)—a process by which the department reviews and validates capability gaps—and requirements to mitigate those gaps. DHS further clarified its directives in April 2016 through DHS Instruction Manual 107-01-001-01, Department of Homeland Security Manual for the Operation of the Joint Requirements Integration and Management System. The JRC also instituted a series of training courses that provide an overview of JRIMS and its core concepts. JRC validation of requirements confirms the requirements are traceable, feasible, and cost-informed.

In addition to validation by the JRC, DHS's Under Secretary for Management approves the operational requirements that the components developed and reviews them at a series of predetermined acquisition decision events. Figure 3 depicts the acquisition life cycle established in DHS acquisition policy, which DHS initially established in November 2008. An important aspect of acquisition decision event 2A, which begins the "Obtain" phase and system development, is the decision authority's review and approval of key acquisition documents that establish the cost, schedule, and requirements baselines for a program. The operational requirements document and acquisition program baseline are key acquisition documents requiring this approval and include a program's key performance parameters. DHS also revisits these baselines at subsequent acquisition decision events in order to determine whether the requirements remain achievable.

Prior GAO Work on DHS Requirements Development

We have previously reported on the importance of stable requirements and the costs of changing them. In March 2016, we found that changes to key performance parameters have been common and are likely to continue for several reasons. While some changes may have a valid reason, such as a response to emerging threats, we found that one of the most common reasons programs changed key performance parameters was that the originally approved key performance parameters had been poorly defined. Key performance parameter changes on several programs were associated with schedule slips and cost growth.
DHS leadership acknowledged that the department has had difficulty defining key performance parameters, and said that the Office of Program Accountability and Risk Management has improved its ability to help programs define them. We recommended, among other things, that DHS require the components to submit program funding certification memos to aid affordability discussions. DHS concurred and implemented our recommendation.

In October 2016, we found that the JRC's structure and management approach—informed by assessments of requirements processes, guidance, and lessons learned from DHS components—are generally consistent with key practices for mergers and organizational transformations. However, we recommended that DHS's Office of the Chief Information Officer have a more formal and consistent role than that of a non-voting advisor to the JRC, since 24 of 36 major acquisitions were information technology programs, and we previously identified poor requirements definition as a factor in failed information technology programs. DHS concurred with our recommendation and implemented it in November 2016.

In April 2017, we found that DHS's acquisition policy was not consistent with acquisition best practices in terms of when to enter the "Obtain" phase depicted in figure 3. Specifically, best practices call for ensuring that a program's needs are matched with available resources—such as technical and engineering knowledge, time, and funding—prior to starting product development. We recommended, among other things, that DHS require that major acquisition programs' technical requirements be well-defined and that DHS conduct key technical reviews prior to approving programs to initiate product development, in accordance with acquisition best practices. DHS concurred with our recommendation but has not yet implemented it.

Over Half of the Selected Programs Changed Requirements

Our analysis found that 9 of the 14 programs from the seven components that we reviewed changed key performance parameters for various reasons after program approval and entry into the "Obtain" phase. DHS had initially approved most programs' key performance parameters before it reestablished the JRC in November 2014. Whether these programs changed DHS-approved key performance parameters is shown in table 1. We found that the causes of these changes varied but included requirements that did not accurately describe end user needs, requirements that were not achievable given available technologies, and programs that pursued greater capability than originally intended. Further details on the nine programs that changed their requirements are in table 2. To mitigate these types of requirements changes, we identified several principles that are critical first steps to successful implementation of programs; the remainder of this report presents examples of when the principles have been implemented and when they have not.

One of the Seven Selected Components Has a Policy for Requirements Development

Among the seven DHS components we reviewed, each of which is responsible for managing major acquisition programs, only the U.S. Coast Guard has a formalized policy in place for developing requirements. Of the other six components, some are developing such policies and others rely on JRIMS guidance. In the absence of component-level policies, some sub-organizations and programs within the components have developed their own requirements policies.
U.S. Coast Guard Has an Approved Requirements Policy, While the Other Six DHS Components in Our Review Do Not

The U.S. Coast Guard, which has a long history of managing large acquisition programs, established a requirements policy to assess needs and fill capability gaps in 2009 and updated it in 2017. The most recent version of this requirements policy, the Coast Guard Operational Requirements Generation Manual, aligns its policies with DHS's acquisition and requirements policies. The manual contains guidance on requirements development and the analytic efforts used to develop the requirements documents. The manual also describes the personnel who are to be included in requirements development, provides guidance on drafting the necessary documentation, and includes templates to do so. As part of the process, requirements development personnel work with end users to generate requirements, which the U.S. Coast Guard reviews and approves before going to the DHS JRC for validation.

The status of developing a requirements policy across the other six components is as follows:

- Immigration and Customs Enforcement, National Protection and Programs Directorate, Transportation Security Administration, and U.S. Citizenship and Immigration Services officials told us that they are currently developing or considering developing policies. These components have not yet set time frames for approving these policies.

- A Federal Emergency Management Agency official stated that the agency is planning to develop a formal requirements policy but is waiting for the JRC to clarify JRIMS policy on information technology program reviews and decision authorities before doing so. However, such clarification does not prevent the agency from drafting an interim policy.

- Customs and Border Protection has a draft requirements development policy but did not provide a definitive timeline for completion. Although Customs and Border Protection does not yet have a finalized policy, the following sub-component operational organizations have documented their requirements policies. For example:

- Border Patrol finalized a requirements management process policy on June 12, 2018, that defined roles and responsibilities throughout the process. The requirements policy was preceded by an October 18, 2016, policy on the process for identifying capability gaps. We previously reported on the Border Patrol's policy in February 2017 and recommended clarifying the roles and responsibilities of the parties involved.

- The Office of Technology Innovation and Acquisition developed a draft requirements handbook in 2011 that provided guidance for the execution of activities within each stage of development, including defining operational requirements.

- The Passenger Systems Program Office documented its requirements management policy in 2010, outlining requirements development at a high level.

While these sub-components have taken the key step of documenting their policies, without a single component policy, Customs and Border Protection may not be efficiently and effectively meeting its mission.

Without Requirements Policies, Components Risk Failing to Meet Mission Needs

In the absence of component-level policies, we found that components are less likely to establish the base of knowledge needed for requirements development. Further, we found this contributes to an inability to properly mitigate capability gaps and meet mission and end user needs.
Outcomes for a number of our case study programs illustrate the potential benefits of having component-level requirements development policies in place.

National Flood Insurance Program PIVOT (not an acronym): Federal Emergency Management Agency officials told us that the current program is the agency's third effort to modernize its information technology systems, after two failed attempts. Program officials said that one of the previous attempts failed to meet capability gaps and end user needs because of a lack of clear policies for developing requirements. The officials said that failure is less likely now because the program uses lessons learned from the previous attempts. In addition, the JRC is encouraging the component to adopt rigorous standards for developing requirements. However, without a policy to capture these lessons learned, programs within the Federal Emergency Management Agency are at risk of losing that knowledge.

National Security Cutter: The U.S. Coast Guard began requirements development for the National Security Cutter in the late 1990s, before it had established a documented requirements development policy in 2009. We found in 2010 that the lack of an overarching, formalized policy resulted in requirements that were vague, not testable, not prioritized, and not supportable or defendable. In 2014, the National Security Cutter completed initial operational testing but did not fully demonstrate 7 of its 19 key performance parameters, including those related to unmanned aircraft and cutter-boat deployment in rough seas. To meet the cutter-boat deployment parameter, U.S. Coast Guard officials said that the program had to overcome differing interpretations of the parameter between the U.S. Coast Guard and its independent test officials. One key practice for requirements development is assigning roles and responsibilities, such as when and in what capacity test officials should be involved in requirements development, to avoid just such an outcome and the resulting effect on cost and schedule. U.S. Coast Guard officials stated that end users of the National Security Cutter have since demonstrated its key performance parameters during U.S. Coast Guard operations.

Electronic Baggage Screening Program: Without a finalized requirements development policy, the Transportation Security Administration's program developed requirements that focused on how the system functioned as opposed to the capability it would provide. Program officials said that neither the Transportation Security Administration nor the program office had a documented policy for requirements development when the program began in 2004. In this environment, the program adopted an informal approach to developing operational requirements by collecting end user input. However, officials noted that end users listed technical requirements rather than broader operational requirements. Officials told us that the program "backed into" operational requirements using these technical requirements, resulting in a system more focused on function and less on capability. Without a focus on the capability, the program risked not meeting the capability gap and end user need.

We also found an example of where a component's policy was beneficial to a program developing requirements:

Offshore Patrol Cutter: The U.S. Coast Guard has matured its requirements development policies since the National Security Cutter program, as described above.
For the Offshore Patrol Cutter, the U.S. Coast Guard has six DHS-approved key performance parameters, such as operating range and duration. The U.S. Coast Guard plans to use engineering reviews and developmental and operational tests throughout the acquisition to refine and demonstrate requirements. For example, to refine the requirements and ensure end user input, the U.S. Coast Guard conducted an early operational assessment of the cutter's key performance parameters and associated lower-level technical requirements. According to officials, specific policies guided the assessment to, in part, ensure that the program refined key performance parameters before progressing through the remaining acquisition phases.

DHS's JRIMS directive and manual are not designed to provide this level of specificity for component-level requirements development. JRIMS encourages components to elicit end user needs and translate them into requirements. It also authorizes the components to develop their own policies consistent with the intent of, and the required capability documentation in, the JRIMS manual and DHS Instruction Manual. Federal standards for internal control and key practices for requirements development, such as those in Carnegie Mellon University's Capability Maturity Model Integration for Development, state that organizations should establish responsibility and authority by having documentation that communicates the "who, what, when, where, and why" of achieving their missions. A policy also provides a means to retain organizational knowledge and mitigate the risk of having that knowledge limited to a few personnel. Such a policy should include a documented process for developing and managing requirements, which can help reduce the risk of developing a system that does not meet end user needs, cannot be adequately tested, and does not perform or function as intended. We depict four key practices for requirements management in figure 4.

DHS officials indicated to us that one factor that contributes to a component's lack of finalized requirements policies is the prioritization of starting an acquisition over developing requirements. This situation reflects what we have found over many years at the Department of Defense: undesirable program outcomes share a common origin in that decisions are made to move forward with programs before the knowledge needed to reduce risk and support those decisions is sufficient. There are strong incentives within the acquisition culture to overpromise performance while understating cost and schedule. A key enabler of successful programs is firm, feasible requirements that are clearly defined, affordable, and cost-informed. Once programs begin, requirements should not change without assessing their potential disruption to the program. Of note, DHS established its formal acquisition process in 2008 but did not have a similar emphasis on requirements development until 2016, when the JRIMS process was set forth.

DHS requirements officials said that the renewed emphasis on requirements development at DHS requires a significant culture change among the components, pushing them away from previous practices that undervalued well-defined requirements. They said that the components generally completed the necessary requirements documents to comply with department guidance and formats rather than to ensure that the components identified the needed capabilities and generated the correct requirements.
DHS officials said that in the past, some program offices would contract out the capability assessment and requirements development, have the resulting documents approved by DHS, but not use them to guide the acquisition. Two component requirements officials told us that their components' previous acquisition and requirements processes focused on obtaining funding before developing requirements. Most components indicated that they are planning to draft a requirements development policy. However, without specific timeframes for completing these efforts, there is a risk that management attention will not be sustained and planned actions will not be implemented. Without component-level requirements policies that are aligned with the JRC and JRIMS standards, DHS is missing an opportunity to help ensure that components' programs are set up from the beginning to meet end user needs and close capability gaps.

Utilization of an Independent Requirements Organization Inconsistent Across Selected Components

Three of the seven DHS components in our review have established requirements development organizations, such as offices or directorates, independent of the acquisition function. Among the reasons cited by these components' officials was recognition of the importance of the operational requirements development function for addressing capability gaps. Those that do not have separate requirements organizations cited, among other things, the smaller size of their components. However, according to key principles, independent lines of authority should develop operational requirements and manage acquisitions separately, regardless of size.

Three Components Have Independent Requirements Development Organizations but Remaining Four Components Do Not

Three of the seven DHS components in our review have established independent requirements development organizations that are separate from acquisition offices, as shown in table 3. The three components that established requirements organizations did so at various times.

In 2009, the U.S. Coast Guard formally placed responsibility for its requirements development policy in its capabilities directorate under the Assistant Commandant for Capability, who reports to the Deputy Commandant for Operations, one level below the Vice Commandant of the Coast Guard. The capabilities directorate, which is separate from the acquisitions directorate, provides oversight and management of the requirements development process. This directorate provides expertise as well as an independent quality review of the requirements documents generated for approval.

Customs and Border Protection officials noted that they created a requirements organization in 2010 in the Office of Technology Innovation and Acquisition. In 2016, through an organizational realignment, Customs and Border Protection separated the requirements organization and established the Planning, Analysis, and Requirements Evaluation Directorate. The officials stated that due to concerns about independence from the acquisitions office, Customs and Border Protection placed this Directorate in the Operations Support office.

The Transportation Security Administration established the Office of Requirements and Capabilities Analysis in 2017, in part, because officials told us they recognized that prior requirements development efforts were not being done the right way.
This new office, which is separate from the Office of Acquisition Management, reports directly to the Executive Assistant Administrator of Operations Support.

The remaining four components that we reviewed did not have separate, independent requirements development organizations. Officials from Immigration and Customs Enforcement, National Protection and Programs Directorate, and U.S. Citizenship and Immigration Services noted that they are planning to develop such organizations but have not provided specific time frames for doing so. An official from the National Protection and Programs Directorate told us that although an independent office has not been established, he serves as an independent requirements official, separate from acquisitions. Among the reasons cited by components' officials for not having a requirements organization at the time of our review were a primary focus on the acquisition function, associated funding issues, and reliance on the JRC to help refine their requirements. Officials also noted the smaller size of their respective components and their smaller number of major acquisitions as reasons for not having an independent requirements organization. Regardless of size, components need to ensure that requirements development is independent of acquisitions in order to guard against possible bias by acquisition officials toward a specific materiel solution.

A Separate, Independent Requirements Organization Is Critical to Addressing Capability Gaps

According to federal standards for internal control, independent lines of authority should develop requirements and manage acquisitions separately. These standards state that management should design control activities to achieve objectives and respond to risks. In addition, authorities should segregate incompatible duties to prevent risks such as management override. For example, if requirements developers were part of the acquisition function, management could tailor operational requirements to satisfy preferred acquisition outcomes, increasing the risk that capability gaps will not be addressed. The absence of an independent requirements organization hampers the components' ability to remove biases and identify crosscutting opportunities and investments. See figure 5 for a notional example of organizations with separate functions.

In accordance with these standards, DHS, at the department level, has separate requirements, acquisitions, and resourcing organizations, each with its own governance structure. In addition, U.S. Coast Guard policy notes that requirements development, when separated from acquisition organizations, results in an operational requirements document that conveys the user's true needs. The policy goes on to state that the requirements development organization informs the acquisition process by ensuring requirements are traceable to strategic objectives and that recommended courses of action to address capability gaps are cost-informed and assessed for feasibility. According to GAO's best practices, while these organizations should be separate, there should be consistent collaboration and feedback throughout the process.

We found examples of programs in our review that would have benefited from an independent organization at the component level.

Immigration and Customs Enforcement, TECS Modernization (not an acronym): The acquisition program office set the requirements without an understanding of the capability gaps it was trying to close.
Without a requirements development office to guide development, program officials stated that they generated approximately 25,000 technical and operational requirements to address the capability gaps, which they were unable to prioritize. The program revised its operational requirements a few times and went through a replanning initiative that included a full review of all the requirements for completeness and accuracy in order to determine the program's operational requirements. Immigration and Customs Enforcement officials stated that they recognize the importance of requirements development and are in the process of establishing a requirements organization.

U.S. Citizenship and Immigration Services, Transformation: The program began requirements development in 2006 in the absence of an independent organization for requirements development and subsequently generated three operational requirements documents over a six-year period. Our review showed that the key performance parameters changed significantly from the oldest document to the most recent one. For example, the operational requirements document from 2009 had a key performance parameter called "account hardening," which involved gathering identity and biometric evidence; the document from 2015 did not contain this parameter. In April 2015, nine years after starting requirements development, DHS leadership finalized a revised set of operational requirements after the program struggled again to meet its previous requirements.

We also found an example of where a component's requirements organization was beneficial to a program developing requirements:

Customs and Border Protection, Cross Border Tunnel Threat: This program is analyzing alternative capabilities as it moves toward the JRC's validation of its requirements. To aid in developing the operational requirements, Border Patrol, a sub-component of Customs and Border Protection, has its own Operational Requirements Management Division. In addition, Customs and Border Protection officials noted that its Planning, Analysis, and Requirements Evaluation Directorate is coordinating, guiding, and providing oversight to ensure the operational requirements address the capability gaps. In doing so, these requirements organizations facilitate input from subject matter experts on tunnel threats and from end user agents who have to mitigate these threats.

Majority of Selected Components Have Not Assessed Workforce Needs or Established Training for Requirements Development

We found that two components have assessed requirements workforce needs, and one has provided requirements-specific training. Components gave different reasons why they have not yet taken one or more of these steps, including a lack of resources.

Two Components Have Assessed Requirements Workforce Needs, and One Has Provided Requirements-Specific Training

Two of the seven components we reviewed, the Federal Emergency Management Agency and Customs and Border Protection, performed assessments of workforce needs for requirements development. The Federal Emergency Management Agency assessed its requirements workforce needs in 2016 and found, among other things, that it does not have the capacity to identify and analyze capability gaps or accurately trace operational requirements to capability needs. As a result of the assessment, the agency requested additional requirements personnel in the fiscal year 2019–2023 budget cycle.
Customs and Border Protection requirements officials stated that they last conducted an assessment in 2013. They stated that the assessment identified the appropriate number and types of personnel necessary to conduct requirements development through an analysis of historical requirements workloads. In addition, Customs and Border Protection officials said that they are currently performing an assessment as part of their Acquisition Management Performance Improvement initiative. The initiative assesses training needs and availability and is due at the end of fiscal year 2018.

Requirements officials from Immigration and Customs Enforcement, National Protection and Programs Directorate, Transportation Security Administration, and U.S. Citizenship and Immigration Services told us that they have not assessed their requirements workforce needs and have no plans to do so. U.S. Coast Guard requirements officials told us that although they have not conducted a formal assessment of their workforce needs, they informally assess those needs and would like to increase the number of personnel who have requirements training across the organization.

Although the U.S. Coast Guard has not conducted an assessment of its workforce needs, it is the only component that has an established requirements training process. Requirements officials told us that the U.S. Coast Guard initially established training and training-related certification standards in 2007 to emulate similar changes taking place at the Department of Defense and to address previous U.S. Coast Guard acquisition challenges. Specifically, the U.S. Coast Guard requirements development organization assigns end users for a two- to three-year rotation and provides them training and certification on requirements development. The requirements development certification program has two levels and requires both classroom-based training and on-the-job experience. The U.S. Coast Guard assigns those who complete the higher level of certification to develop requirements for more complex and costly programs. This helps to ensure that requirements personnel can give timely, relevant end user input and can differentiate between operational and technical requirements. U.S. Coast Guard requirements officials told us that the training and certification standardizes the proficiency of the requirements workforce across the component. In addition, Customs and Border Protection officials told us that they are in the process of training their personnel on operational requirements development as part of a larger training program implemented through their Acquisition Management Performance Improvement effort.

Components provided multiple reasons why they have not assessed their requirements workforce development needs or implemented a requirements training program. Specifically:

- Federal Emergency Management Agency is waiting on resources to build a requirements organization and provide component-specific training.

- Immigration and Customs Enforcement officials stated that they are standing up a requirements development organization and have requested additional personnel. However, they have not done a comprehensive assessment of their workforce needs nor established additional training, as a result of resource constraints.

- National Protection and Programs Directorate requirements officials told us that they do not currently have plans to assess the sufficiency of requirements development personnel and do not have component-specific requirements training.
The Transportation Security Administration has recently established a requirements development organization but has not yet assessed its workforce needs or established component-specific training.

U.S. Citizenship and Immigration Services requirements officials told us that they have not assessed their workforce and training needs, as they are more focused on processes supporting information technology programs rather than requirements overall.

Acquisition Programs Benefit from an Adequately Staffed and Trained Requirements Workforce

Assessment and training—according to GAO's internal controls, workforce development key principles, and DHS's workforce guidance—are two key steps in workforce planning to ensure that the right numbers of people with the right skills are available at the right time. Specifically, an assessment should include an understanding of the goals and objectives of the component, the workforce needed to achieve the goals, and the capacity and capabilities needed to support workforce strategies. With a better understanding of the needs and current capabilities of the workforce, management can develop specific strategies to better educate the workforce and standardize skill levels. Organizations can then develop specific training to develop the workforce and fill areas of identified need with involvement of management and employees. Organizations can use a variety of instruction approaches for training—for example, classroom-based learning, distance learning, or structured on-the-job training. When warranted, organizations should consider blending learning methods (such as web-based and instructor-led) within the same training effort to leverage resources in the most efficient way possible. See figure 6 for a notional workforce planning process that matches workforce needs with the goals of the organization.

The JRC approved a DHS-wide Requirements Specialization as a part of the Technology Manager Certification on June 21, 2018. In addition, JRC officials stated that they are expanding requirements development training and determining course content for the certification. We have previously reported that having the appropriate workforce is an important factor in meeting an agency's mission. Until the components assess their needs and take appropriate action, acquisition programs may continue to be at risk of not meeting end user needs, as they will not have a trained workforce to develop requirements. Selected case study acquisition programs further illustrate the effect of a trained requirements development workforce.

Customs and Border Protection and Immigration and Customs Enforcement, TECS Modernization (not an acronym) programs: These programs illustrate the effect that knowledgeable requirements officials can have. Customs and Border Protection's TECS program had an engineer with requirements development experience. According to this official, TECS Modernization traced all program requirements from the operational to the technical level in a matrix to ensure that they were valid and understood. A trained workforce, however, is one principle among many needed to provide a program with a sound start. In this case, a trained requirements official took the critical step of tracing the requirements to the gap, but his involvement cannot address the requirements and program execution issues that may arise throughout the life of a program. In fact, TECS Modernization later experienced changes to requirements and schedule.
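To illustrate the kind of tracing described above, the notional matrix below shows how a requirements traceability matrix links each capability gap to the operational requirement that addresses it and to the technical requirements that implement it. The gaps and requirements shown are hypothetical examples for illustration only, not TECS Modernization's actual requirements:

Capability gap | Operational requirement | Technical requirement(s)
Officers cannot vet travelers in real time | Return screening results to the officer within a specified response time | Watchlist query interface; results caching
Case records cannot be shared across offices | Make case records retrievable by authorized users at any port of entry | Centralized case data store; role-based access controls

Read in reverse, such a matrix also lets requirements developers confirm that every technical requirement traces back to an operational need and, ultimately, to a capability gap. As discussed below, this is the check that Immigration and Customs Enforcement's TECS Modernization program was initially unable to perform.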
In contrast, Immigration and Customs Enforcement's TECS Modernization program officials told us that the program initially utilized contractors for requirements development. Rather than develop operational requirements to close the capability gap, development started with thousands of technical requirements. The program could not trace these requirements back to the capability gap and could not implement the proposed solution. Immigration and Customs Enforcement re-started the program by bringing in trained requirements development personnel who worked with the end users to determine the appropriate operational requirements. Current Immigration and Customs Enforcement officials acknowledged the problems of the past but indicated that with the operational requirements now in place, they have a greater likelihood of success.

Transportation Security Administration, Electronic Baggage Screening and Passenger Screening Programs: End users of the screening units at an airport told us they are not aware of anyone, such as a requirements development official, to whom they can communicate emerging threats or problems with the screening units. They also said that some of the key performance parameters, such as the number of bags processed per hour, are not based on current data. In their experience, the volume of travelers and bags has increased significantly. Without a trained requirements development official to whom end users can provide input, the program risks not meeting end user needs.

U.S. Coast Guard, Offshore Patrol Cutter: Requirements officials told us that they continue to mature their requirements workforce to ensure the appropriate requirements for programs. The U.S. Coast Guard's requirements workforce, as stated previously, utilizes end users with requirements training as subject matter experts on requirements development. These end users with requirements training work together with end users currently using the assets to ensure that requirements are well-defined. For this program, the U.S. Coast Guard recently held an assessment of the draft requirements for the cutter that solicited input from users across the organization. The trained requirements personnel facilitated the assessment and gathered the input to refine the requirements. While it is too early to determine how this acquisition program will perform against baselines, this initial focus on requirements is positive.

As most components recognize the need for requirements development, it is important that they assess their workforce needs and align those needs with training to develop a workforce that can help ensure that requirements match end user needs. DHS is taking steps to standardize training and certification across its requirements workforce to ensure that the workforce at all levels implements requirements development in accordance with JRIMS. However, DHS remains at risk until such training and certification are fully implemented throughout DHS and its components.

Conclusions

While DHS now has JRIMS in place, which authorizes the components to create their own internal requirements development organizations, the components lag in creating the means to develop requirements and close identified capability gaps. While DHS components generally are working toward developing their own requirements policies, they have not yet established timeframes for completing this effort. Without specific timeframes, there is the risk that management attention will be lost.
Further, some components do not have in place independent requirements development organizations, separate from their acquisition functions. The overlap in these responsibilities does not comport with best practices and engenders a risk that acquisition officials may override requirements developers to procure a preferred solution as opposed to the one needed by the end user. Further, most of the components in our review have not taken steps to assess their requirements workforces and provide training. Compounding this problem is a lack of training and certification standards for requirements personnel at the agency level. Rather, components have prioritized obtaining funding and starting acquisition programs over requirements development. Not giving requirements development adequate priority is likely to contribute to poorly defined requirements and delays in achieving—or failure to achieve—the capabilities necessary to perform components' missions. DHS, at a department level, has recognized the importance of having a requirements policy, an independent requirements organization, and a trained workforce by establishing JRIMS, the JRC, and associated training. While the components vary in acquisition activity, it is incumbent on them to recognize the importance of these critical elements. Past acquisitions have demonstrated the need to do so.

Recommendations for Executive Action

We are making a total of 25 recommendations to the Secretary of DHS. Specifically, we recommend that the Secretary of DHS ensure that:

The Commissioner of Customs and Border Protection, through the Executive Assistant Commissioner for Operations Support, finalizes and promulgates Customs and Border Protection's draft policy for requirements development. (Recommendation 1)

The Commissioner of Customs and Border Protection, through the Executive Assistant Commissioner for Operations Support, updates the 2013 workforce assessment to account for the independent requirements organization's current workforce needs. (Recommendation 2)

The Commissioner of Customs and Border Protection, through the Executive Assistant Commissioner for Operations Support, establishes component-specific training for requirements development. (Recommendation 3)

The Administrator of the Federal Emergency Management Agency establishes a policy for requirements development. (Recommendation 4)

The Administrator of the Federal Emergency Management Agency establishes an independent requirements development organization within the Federal Emergency Management Agency. (Recommendation 5)

The Administrator of the Federal Emergency Management Agency updates the 2016 workforce assessment to account for an independent requirements organization's workforce needs. (Recommendation 6)

The Administrator of the Federal Emergency Management Agency establishes component-specific training for requirements development. (Recommendation 7)

The Director of Immigration and Customs Enforcement establishes a policy for requirements development. (Recommendation 8)

The Director of Immigration and Customs Enforcement establishes the planned independent requirements development organization within Immigration and Customs Enforcement. (Recommendation 9)

The Director of Immigration and Customs Enforcement conducts a workforce assessment to account for an independent requirements organization's workforce needs. (Recommendation 10)

The Director of Immigration and Customs Enforcement establishes component-specific training for requirements development. (Recommendation 11)

The Under Secretary of Homeland Security for the National Protection and Programs Directorate finalizes and promulgates the National Protection and Programs Directorate's draft policy for requirements development. (Recommendation 12)

The Under Secretary of Homeland Security for the National Protection and Programs Directorate establishes the planned independent requirements development organization within the National Protection and Programs Directorate. (Recommendation 13)

The Under Secretary of Homeland Security for the National Protection and Programs Directorate conducts a workforce assessment to account for an independent requirements organization's workforce needs. (Recommendation 14)

The Under Secretary of Homeland Security for the National Protection and Programs Directorate establishes component-specific training for requirements development. (Recommendation 15)

The Administrator of the Transportation Security Administration, through the Executive Assistant Administrator of Operations Support, finalizes and promulgates the Transportation Security Administration's draft policy for requirements development. (Recommendation 16)

The Administrator of the Transportation Security Administration, through the Executive Assistant Administrator of Operations Support, conducts a workforce assessment to account for an independent requirements organization's workforce needs. (Recommendation 17)

The Administrator of the Transportation Security Administration, through the Executive Assistant Administrator of Operations Support, establishes component-specific training for requirements development. (Recommendation 18)

The Commandant of the U.S. Coast Guard, through the Assistant Commandant for Capability, conducts a workforce assessment of the U.S. Coast Guard's capabilities directorate. (Recommendation 19)

The Director of U.S. Citizenship and Immigration Services finalizes and promulgates U.S. Citizenship and Immigration Services' draft policy for requirements development. (Recommendation 20)

The Director of U.S. Citizenship and Immigration Services establishes the planned independent requirements development organization within U.S. Citizenship and Immigration Services. (Recommendation 21)

The Director of U.S. Citizenship and Immigration Services conducts a workforce assessment to account for an independent requirements organization's workforce needs. (Recommendation 22)

The Director of U.S. Citizenship and Immigration Services establishes component-specific training for requirements development. (Recommendation 23)

The JRC collaborates with components on their requirements development policies and, in partnership with the Under Secretary for Management, provides oversight to promote consistency across the components. (Recommendation 24)

In addition, the Secretary of DHS should ensure that training for requirements development is consistent by establishing training and certification standards for DHS and the components' requirements development workforces. (Recommendation 25)

Agency Comments and Our Evaluation

We provided a draft of this report for review and comment to DHS. DHS provided written comments, which are reproduced in appendix II. In its comments, DHS concurred with all 25 of our recommendations and identified actions it plans to take to address them. DHS also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees and the Secretary of Homeland Security.
In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or makm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

Appendix I: Objectives, Scope, and Methodology

This report discusses (1) how often selected Department of Homeland Security (DHS) programs changed requirements; and assesses the extent to which the selected components have (2) developed policies for requirements development, (3) established independent requirements organizations, and (4) taken steps to assess and train a requirements workforce. Our focus for this report was on the DHS components, as they are responsible for developing the requirements to meet end user needs. To conduct our work, we reviewed the DHS Master Acquisition Oversight List as of April 2017 and selected seven DHS components that have Level 1 and Level 2 major acquisition programs and cover a broad range of missions. The seven components are as follows:

Customs and Border Protection
Federal Emergency Management Agency
Immigration and Customs Enforcement
National Protection and Programs Directorate
Transportation Security Administration
U.S. Citizenship and Immigration Services
U.S. Coast Guard

From these seven components, we selected 14 major acquisition programs with DHS-approved key performance parameters to serve as case studies for our review. We selected a non-generalizable sample of programs based on different factors, including their acquisition phase, component, acquisition level, and whether they were information technology (IT) or non-IT. We selected the programs on these factors to reflect the broad spectrum of DHS components' operations. In addition, we coordinated our program selection with the DHS Office of Inspector General due to its ongoing audit of the implementation of Joint Requirements Council (JRC) policies in DHS acquisition programs. See table 4 below for a description of the programs. We also reviewed two programs that did not have DHS-approved key performance parameters at the time of our review to understand how requirements are determined before DHS validation. The two programs were Customs and Border Protection's Cross Border Tunnel Threat and Biometric Entry-Exit Program.

To determine the extent to which the selected programs changed operational requirements, we examined key performance parameters, which the programs document in requirements and acquisition documents, before and after DHS approval, when key performance parameters should be stable. Such program documents include the operational requirements documents and acquisition program baselines. In certain cases, programs had multiple iterations of these documents. We then compared the extent to which key performance parameters changed between documents. We selected operational requirements documents and acquisition program baselines because these are the key requirements documents validated by DHS management in order for programs to begin development. We focused on the presence of policies for requirements development, independent requirements organizations, and requirements-specific workforce and training in components, as our past work on major acquisitions has shown that these are the fundamental building blocks required to develop well-informed operational requirements. This selection was also informed by our standards for internal controls.
To determine the extent to which DHS components' requirements development policies exist, as well as the extent to which those components established independent organizations, we reviewed component documentation pertaining to requirements development, such as instruction manuals, mission statements, and capability analyses. We also reviewed DHS documentation such as the Joint Requirements Integration and Management System Instruction Manual and the Acquisition Management Instruction to determine the requirements development guidance provided to the components. We also reviewed program-level documents such as mission need statements and operational requirements documents to determine the capability gaps that respective programs were intended to mitigate, and the programs' key performance parameters. To identify assessment, training, and certification standards applicable to DHS's requirements development workforce, we spoke with officials from Defense Acquisition University regarding comparable standards that apply to the Department of Defense's requirements workforce. We also reviewed training standards materials provided by these officials. In addition, we spoke with JRC and U.S. Coast Guard officials regarding their requirements development training and certification standards and reviewed available documentation.

To inform each of our objectives, we interviewed officials at various levels throughout DHS to understand their relationship to requirements development. We interviewed JRC officials to determine their interaction with components for requirements development, policies, training, and organizational standards. We also interviewed component-level officials to understand the extent to which they have implemented requirements development policies, organizations, and training for their components. We then interviewed both program officials and program end users to understand their roles in requirements development, the extent to which their feedback is incorporated into the requirements development process, and the extent to which they receive requirements development training. In addition, we furthered this understanding by reviewing component- and program-level documentation including guidance manuals, mission needs statements, and operational requirements documents.

We assessed the components' requirements development practices against GAO's standards for internal control and additional supporting criteria. The standards identify key principles to help entities achieve their objectives, such as delivering capabilities to end users. Specifically, management should establish structure, responsibility, and authority, including developing an organizational structure and documentation. In addition, management should demonstrate a commitment to competence by developing individuals, such as through training.

We conducted this performance audit from May 2017 to August 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Comments from the Department of Homeland Security

Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact

Marie A. Mak, (202) 512-4841, or makm@gao.gov.
Staff Acknowledgments

In addition to the contact named above, J. Kristopher Keener, Assistant Director; James Kim; Stephen V. Marchesani; Cody Knudsen; Claire McGillem; Pete Anderson; Roxanna Sun; and Sylvia Schatz made key contributions to this report.
Why GAO Did This Study

GAO has previously found that DHS's components had acquisition programs that did not meet requirements and that those requirements were, in some cases, poorly defined. Poorly defined requirements increase the risk that acquisitions will not meet the needs of users in the field—for example, border patrol agents or emergency responders. GAO was asked to examine DHS components' practices for developing requirements. This report addresses the policies, organizations, and workforce that selected DHS components use to develop requirements for their acquisition programs. GAO selected seven DHS components with significant acquisition programs and a non-generalizable sample of programs—based on cost, component, and acquisition phase—as case studies. GAO analyzed policies and program documentation and interviewed DHS and component officials, as well as end users of DHS programs. GAO compared components' practices to industry best practices and federal internal control standards.

What GAO Found

GAO has identified several best practices to ensure that operational requirements for acquisitions are well-defined and found that some Department of Homeland Security (DHS) components met them while others did not. These practices include a formal policy for developing requirements, an independent requirements organization, and an understanding of workforce needs and training. The table below shows GAO's assessment of seven of DHS's components against these practices.

Establishing a formal policy to guide the process is critical to developing well-defined requirements. However, only the Coast Guard has an approved policy for requirements development among the seven components reviewed. Without well-defined requirements, components are at risk of acquiring capabilities that will not meet mission needs. DHS officials told GAO that components have generally prioritized obtaining funding and starting programs over developing requirements.

Three components have a requirements development organization, separating requirements from acquisition in addressing capability gaps. Officials from components without such organizations told GAO that they have fewer major acquisitions and rely on DHS to assist in requirements development. DHS policy and best practices, however, maintain the importance of this separation regardless of the number of major acquisitions to guard against possible bias by acquisition officials toward a specific materiel solution.

Two components have assessed requirements development workforce needs, though both assessments need to be updated, and one component has provided requirements development training and certification. Other component officials told GAO that they lack the resources necessary to take these steps. Best practices indicate that without an appropriately sized and trained workforce, components remain at risk of acquiring capabilities that fail to meet end user needs.

What GAO Recommends

GAO is making 25 recommendations, including to individual components to establish policies and independent organizations for requirements development, assess workforce needs, and establish training and certifications. DHS concurred with all the recommendations.
Background NNSA’s strategic materials programs include a broad range of activities. The programs often include (1) building unique new facilities, (2) modifying and repairing existing facilities and equipment, and (3) developing and deploying new technologies for processing and producing strategic nuclear materials. The programs may involve multiple NNSA and DOE sites and multiple facilities at a given site. For example, since the days of the Manhattan Project, a large portion of the nation’s uranium mission has been executed at the Y-12 National Security Complex in Oak Ridge, Tennessee, with uranium production and associated operations housed in several nuclear facilities within the complex. These facilities are in some cases more than 60 years old. NNSA’s uranium program is coordinating efforts to build the UPF, invest in the infrastructure of existing facilities to extend their lives, and develop and deploy several new technologies that are expected to increase the efficiency and effectiveness of uranium processing. Collectively, these uranium program activities may take more than 2 decades to implement and cost several billion dollars. NNSA’s 2017 future-years nuclear security program estimate projected that NNSA would need about $1.4 billion in fiscal year 2018 to carry out its annual activities associated with the management of these strategic materials programs (see table 1). NNSA documents indicate that the agency expects to spend about $7.7 billion over the next 5 years on activities related to managing its strategic materials. This spending, which would represent about 12 percent of the approximately $63 billion NNSA expects to spend on all weapons activities over this same time period, includes: $4.8 billion for costs related to construction of facilities and other capital equipment purchases that will be used to support the strategic materials mission; and $2.9 billion for program costs related to general activities such as reducing risk and ensuring sufficient supply, as well as the consolidation, disposition, tracking, and accounting of nuclear materials. Program managers are an important part of the federal government’s workforce. They interact with the managers of individual projects to provide support and guidance on those projects but also must take a broad view of the overall objectives of programs and an agency’s organizational culture. According to leading practices outlined by the Project Management Institute, organizations develop program plans, capture and understand stakeholder needs, and establish processes for maintaining program management oversight, among other activities. Recognizing the importance of improving program management, in December 2016 the President signed the ‘‘Program Management Improvement Accountability Act” that required the Office of Management and Budget to, among other things, adopt and oversee implementation of government-wide standards, policies, and guidelines for program and project management for executive agencies and assess the quality and effectiveness of program management for these agencies. We have previously reported on DOE’s and NNSA’s program management challenges. In March 2009, we found that NNSA and the Department of Defense (DOD) established unrealistic schedules, did not establish consistent cost baselines, and did not effectively manage technical risks in some of their nuclear weapon life extension programs. 
These problems resulted in delays, additional expenditures, difficulties tracking the cost of the programs, and difficulties in meeting all of NNSA’s and DOD’s technical objectives. We recommended that NNSA develop and use consistent budget assumptions and criteria for the baseline to track costs over time, among other actions. NNSA agreed with our recommendations and made changes to its cost estimating procedures. In November 2014, we found that the lack of requirements for programs meant that DOE could not ensure that it was developing fully credible cost estimates for programs. We recommended that DOE revise its program management directives to require that programs develop life-cycle cost estimates in accordance with our 12 cost-estimating best practice steps. DOE agreed with our recommendation but has not yet incorporated the best practice steps into its program management directives. In February 2016, we found that the B61-12 life extension program, the most complex such program NNSA has undertaken to date, faces ongoing management challenges in some areas, including staff shortfalls and an earned value management system that has yet to be tested. We did not make any recommendations but reiterated previous recommendations such as those already mentioned. In November 2016, we found that DOE and NNSA had not established organization-wide policies or practices addressing leading practices related to program management, and we recommended that DOE do so. DOE did not agree or disagree with this recommendation. NNSA, however, in late 2016 instituted a training program for program management. NNSA’s stockpile stewardship program has established strategic materials as one of the major elements to sustain the nation’s nuclear weapons stockpile. According to NNSA budget documents, the strategic materials programs help ensure the sustainment of nuclear material processing capabilities and fund the stabilization, consolidation, disposition, tracking, and accounting of nuclear materials. Strategic materials are generally not available, or are available only in limited quantities, from commercial suppliers because of their specific properties and use in nuclear weapons or for other national security purposes. NNSA named strategic material program managers in 2014 and 2015 to integrate, oversee, plan, and execute material strategies for uranium (including domestic uranium enrichment), plutonium, and tritium. In addition to the general program management challenges highlighted above, we have also reported previously on challenges facing NNSA’s strategic materials programs: In July 2015, we found that NNSA had identified various challenges in its lithium production strategy that may impact its ability to meet demand for lithium in the future. These challenges included insufficient supply of lithium material and constraints facing NNSA’s efforts to replace the aging lithium production facility. We recommended that NNSA objectively consider all alternatives, without preference for a particular solution, as it proceeds with its analysis of alternatives process. NNSA neither agreed nor disagreed with our recommendation but did undertake a formal analysis of alternatives in 2017, according to NNSA officials. In August 2016, we found that NNSA had not documented important requirements for its plutonium program at Los Alamos National Laboratory in New Mexico. We recommended that, among other things, NNSA should update its program requirements. 
NNSA outlined actions taken and planned to address this recommendation.

NNSA Has Defined Strategic Materials Program Requirements, Including Roles and Responsibilities for Program Managers

NNSA's Office of Defense Programs has set program requirements for the strategic materials programs and has established the roles and responsibilities of the programs' managers. NNSA defined these program requirements in two documents issued in 2016 and 2017. Collectively, these documents set documentation requirements and establish the roles and responsibilities of the strategic materials program managers. According to NNSA officials, these requirements apply to each of the programs, including the lithium program. These requirements are outlined below.

Program Execution Instruction (2016) – In January 2016, NNSA approved a Program Execution Instruction that defines requirements for carrying out NNSA defense programs, such as the strategic materials programs. This instruction outlines a series of requirements that vary based on the categorization—and therefore the rigor—of management applied to a program. Of the four categories outlined in the instruction—Standard Management, Enhanced Management A, Enhanced Management B, and Capital Acquisition Management—NNSA has generally designated the strategic materials programs as "Enhanced Management B," the most rigorous designation applicable to this type of program, according to NNSA officials. The "Enhanced Management B" programs are required to have the following elements documented: a program plan; a work breakdown structure that details the work elements necessary to organize the total work scope, with cost estimates; a decision analysis; an integrated master schedule that includes the entire scope of work required for the program's successful execution; a performance management approach; and a lessons learned/best practices review. According to the instruction, if the scope, cost, and schedule of a program are more complex, moving to a more rigorous program management category is often required. According to the instruction, when enhanced complexity and risk are associated with a program, among other things, "Enhanced Management B" is the appropriate designation. The instruction also allows for programs to "tailor," or modify, the application of certain requirements depending on risk and other factors.

Program Management Policy for Weapons and Strategic Materials Programs (2017) – NNSA issued a program management policy in January 2017 that defines general roles and responsibilities for all four strategic materials program managers. This policy broadly outlines the managers' authority and responsibilities for managing the strategic materials; these responsibilities include developing program documentation and managing risk. According to NNSA officials we interviewed, the policy is based on NNSA's experience in implementing the uranium program in 2014. The policy requires each of the strategic materials programs to develop a number of guidance documents, including a mission strategy, mission requirements, and a technology development plan. For each program, the policy also requires the formation of a strategic materials mission working group composed of the key stakeholders involved in the program.
NNSA Officials Reported Progress in Meeting Strategic Materials Program Requirements but Challenges from Staffing Shortages

NNSA officials told us that they are making progress in implementing the program requirements outlined for each of the strategic materials programs, although some are further along than others. However, these officials said that relatively few staff had been assigned to these programs, which has challenged implementation efforts.

Progress Reported in Implementing Program Requirements

For its two strategic materials programs established in 2014—uranium and domestic uranium enrichment—NNSA officials told us that they are generally meeting the strategic materials program management requirements outlined in the Program Execution Instruction and the Program Management Policy for Weapons and Strategic Materials. NNSA officials identified documents for each program, including a mission strategy, mission requirements, a program plan, and a work breakdown structure. For the other programs, according to agency officials, NNSA is still working to meet these requirements, though the tritium program met all requirements during the course of this review. More specifically, according to agency officials:

The plutonium sustainment program has met some of the Program Execution Instruction requirements to date, including having in place a program plan, work breakdown structure, and decision analysis, but not an integrated master schedule (although one is being developed, according to agency officials). The plutonium program also has a mission strategy in place, as called for by the Program Management Policy for Weapons and Strategic Materials, but has not yet met the other strategic materials program management requirements. According to agency officials, those requirements are being developed.

The tritium sustainment program has recently met the Program Execution Instruction requirements as well, including having a program plan, work breakdown structure, integrated master schedule, and performance management approach in place. Additionally, the program recently updated documentation to meet the Program Management Policy requirements, including revising its Strategic Material Mission Working Group in 2017, according to agency officials.

The lithium program is early in its development, and no program manager has been appointed yet, pending senior NNSA leadership decisions. NNSA has a lithium mission strategy, a mission requirements matrix, and a technology development plan in place, as required by the Program Management Policy for Weapons and Strategic Materials, but the rest of the strategic materials program management requirements are still in the process of being developed, according to agency officials. NNSA officials said that even though the lithium program is not subject to the same requirements, they intend for it to meet all of the same requirements as the other strategic materials programs.

Staffing Challenges Reported

Officials from the Office of Defense Programs, including the strategic materials program managers themselves, said that a shortage of staff has presented a challenge in terms of implementing the requirements of the strategic materials programs and meeting their missions. According to NNSA officials, all of the strategic materials programs have been assigned relatively few federal staff to implement the programs.
The officials also said that while they plan to have all five strategic materials programs fully meet the requirements and operate as cohesive programs, the lack of staff has hampered their efforts to do so. For example, the plutonium manager said more staff were needed to successfully implement the program, and the lithium lead point of contact said that at least two full-time staff members would be required to accomplish the work needed to make the lithium program meet program requirements. Specifically, according to agency officials, as of October 2017, in addition to contractor support:

the uranium program had the program manager and two federal staff assigned;

the domestic uranium enrichment program had the program manager and one federal staff member assigned;

the plutonium program had the program manager and one federal staff member;

the tritium program had the program manager and no dedicated staff, relying instead on staff in other programs, such as a federal program manager from a different program who acts as staff for this program; and

the lithium program had the lead point of contact and no dedicated staff, although a contracted senior technical advisor provides some support.

NNSA officials cited competing agency priorities and current perceived staffing limits as the primary impediments to assigning more staff to these programs. First, according to agency officials, the relative newness of the strategic materials programs and competing agency priorities to modernize the nuclear weapons infrastructure and to modernize and extend the lives of current nuclear weapons have meant that federal staff are in high demand across the agency. This concern is consistent with issues we have identified in our past work as well. For example, in April 2017, we noted NNSA's ambitious, costly, decades-long effort to modernize the nation's nuclear security enterprise. In addition to ongoing and planned infrastructure modernization, some of which is associated with the strategic materials programs, this modernization includes four ongoing expensive weapons refurbishments and efforts to improve the agency's research, development, testing, and evaluation capabilities by, for example, continuing efforts in advanced modeling, simulation, and computing. Similarly, we found in September 2016 that the competing agency priorities for infrastructure modernization and weapons refurbishments had negatively affected another NNSA program: the Enhanced Surveillance Program.

Second, NNSA officials said that they have limited flexibility when it comes to increasing federal staff levels. Specifically, in each year that the total number of federal employees at NNSA exceeds 1,690, the Administrator is required by law to submit to the congressional defense committees a report justifying such excess. In the NNSA Administrator's testimony before the Senate Appropriations Subcommittee on Energy and Water Development in June 2017, he stated that since 2010, NNSA's program funding had increased 28 percent, while its federal staffing levels had decreased by 17 percent. He said that initial results from a yet-to-be-completed study by the Office of Personnel Management in support of the Reform of Government Initiative indicate the need for a 20 percent increase in federal staff at NNSA. We have also previously reported that staffing shortages have affected NNSA's efforts to improve management capability.
For example, we reported in October 2014 that NNSA determined that inadequate levels of federal staff had contributed to management problems with the UPF project. As a result, NNSA increased staffing levels for the UPF project office from 9 full-time equivalents in 2012 to more than 50 as of January 2014. According to NNSA officials, the additional staff enabled NNSA to conduct more robust oversight of the contractor's design efforts than was previously possible. Similarly, in 2016, we found that the B61-12 life extension program, the most costly and complex such program undertaken to date, successfully requested that NNSA enlarge its program office staff from 3 to 8 full-time equivalent staff to provide more management capability. However, we found that even with this increase in federal staff, some NNSA and DOD officials said that they believe that NNSA needs two to three times more personnel in the federal program manager's office to ensure sufficient federal management and oversight.

One issue we noted in this review is that NNSA has not conducted a workforce needs assessment for the strategic materials programs. Strategic materials program officials acknowledged that they had neither specifically assessed the number or skills of staff needed to manage the strategic materials programs, nor did they have current plans to do such an assessment. Our prior work on strategic human capital management has identified certain activities or practices that can help an agency strategically manage its human capital. These activities include determining the critical skills and competencies that will be needed to achieve the programs' missions and developing strategies to address gaps in the number, deployment, and alignment of staff needed. NNSA officials said that individual offices have attempted over time to assess resource and skill needs but that these efforts have been hampered by, among other things, a lack of staff. By determining the critical skills and competencies needed to achieve each strategic materials program's mission and using this determination to develop strategies to address any gaps in the number, deployment, and alignment of staff needed, NNSA may find it has better information to justify increased staffing levels for its strategic materials programs.

Conclusions

Since 2014, NNSA has taken steps to establish programs to maintain and modernize the nation's nuclear weapons stockpile, including appointing federal program managers for four of the five strategic materials programs, as well as steps to establish and organize the programs according to internal program management requirements. This is a significant step given the importance, cost, and complexity of these strategic materials programs. However, NNSA has made varying progress implementing these strategic materials programs, in part because these programs may not have been allotted staff and management capacity commensurate with their cost and scope of work. Although strategic materials program officials acknowledged staffing limitations, they have not determined the critical skills and competencies that will be needed to meet program requirements and, ultimately, achieve the programs' missions.
By determining the critical skills and competencies needed to achieve each strategic materials program's mission and using that determination to develop strategies to address any gaps in the number, deployment, and alignment of staff needed, NNSA may find it has more information to justify increased staffing levels for its strategic materials programs.

Recommendation for Executive Action

The NNSA Administrator should determine the critical skills and competencies that will be needed for the strategic materials programs and use this determination to develop strategies for addressing challenges, if any, related to the number, deployment, and alignment of program staff. (Recommendation 1)

Agency Comments

We provided a draft of this report to DOE and NNSA for their review and comment. NNSA provided written comments, which are reproduced in full in appendix II, as well as technical comments, which we incorporated in our report as appropriate. In its comments, NNSA agreed with our recommendation and stated that the recommendation is consistent with the programs' current evolution. NNSA further stated that it recognizes the need to define the range of skills and competencies necessary to execute the programs' critical missions and that it plans to identify the complete set of core competencies needed for these programs by December 31, 2018.

We are sending copies of this report to the appropriate congressional committees, the Secretary of Energy, the Administrator of the National Nuclear Security Administration, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or trimbled@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

Appendix I: Strategic Nuclear Materials Managed by the National Nuclear Security Administration (NNSA)

NNSA has established programs for ensuring the supply of each of the following strategic materials as well as the capability to process them:

Uranium – National security needs for uranium are met using a large existing inventory of previously enriched uranium. Although NNSA has estimated that stocks are sufficient for projected needs, existing uranium needs to be purified, machined, and recovered from existing operations. The Y-12 National Security Complex in Oak Ridge, Tennessee, is the NNSA site for conducting enriched uranium activities, producing uranium-related components for nuclear warheads and bombs, and processing feedstock for nuclear fuel for the U.S. Navy. In 2004, NNSA decided to construct a new Uranium Processing Facility (UPF) that consolidated the functions of four separate uranium facilities into a single building. In 2014, NNSA, on the advice of a peer review team, decided to pursue a uranium program that includes a smaller UPF and, among other program elements, modifications to existing uranium buildings and capabilities to include several new uranium processing technologies. Construction on the UPF continues at the Y-12 site, and NNSA continues to request funds for that project. Fiscal year 2018 funds are to be used for construction of some related subprojects.
According to NNSA officials, the UPF is expected to be complete by 2025 and cost no more than $6.5 billion. NNSA estimates that additional investments needed to upgrade existing uranium facilities will cost about $20 million per year for the next 20 years.

Domestic Uranium Enrichment – To produce tritium, the Tennessee Valley Authority (TVA) must use unobligated uranium in certain nuclear reactors, under an interagency agreement between the Department of Energy (DOE) and TVA. The United States has not had a sustained uranium enrichment capability since the 2013 closure of the Paducah Gaseous Diffusion Plant, which was originally constructed in 1952. In 2014, NNSA created the domestic uranium enrichment program manager position with responsibility to sustain the agency's supply of low-enriched uranium for tritium production. We currently have ongoing work reviewing the program's plan to ensure supply through 2060. NNSA estimated that over the next 5 years alone, these activities will likely cost more than $400 million.

Plutonium – A set of aging facilities at Los Alamos National Laboratory provides the backbone of NNSA's plutonium work, such as certifying the safety of existing nuclear weapons' plutonium pits and producing new pits to extend the life of nuclear weapons in the stockpile. NNSA conducts plutonium analysis in the Chemistry and Metallurgy Research facility, which was built in the 1950s, but NNSA plans to cease programmatic operations in this facility by 2019 because of its aging infrastructure and because it sits on a seismic fault line. NNSA produces pits and conducts pit surveillance in the 38-year-old high-hazard, high-security Plutonium Facility 4 at Los Alamos. Other important plutonium activities, such as NNSA's plutonium disposition efforts and the processing of plutonium used to provide heat sources for space missions, are not included in the plutonium manager's portfolio because other program offices are responsible for these activities, according to NNSA officials. Officials said that these program offices coordinate capability and facility needs with the plutonium program manager. In August 2014, DOE cancelled plans to construct the nuclear facility that was part of the overall Chemistry and Metallurgy Research Replacement (CMRR) project, which was approved in 2005 to replace the aging Chemistry and Metallurgy Research facility. In its place, DOE approved the implementation of the first part of NNSA's new plutonium strategy: the revised CMRR project, which includes a subproject to remove contaminated equipment no longer in use in Plutonium Facility 4, install new plutonium analysis equipment, and modify an existing building to handle higher quantities of plutonium. NNSA estimated that the CMRR project would cost from $2.4 billion to $2.9 billion and be completed by 2024. In addition, in November 2015, DOE approved the mission need for the implementation of the second part of the strategy: building modular nuclear facilities to add high-hazard, high-security laboratory space at Los Alamos (the Plutonium Modular Approach) to meet plutonium pit production requirements. NNSA estimated that the Plutonium Modular Approach could cost from $1.3 billion to $3.0 billion and be completed by the end of 2027.

Tritium – NNSA has relied on tritium produced many years ago; recycling and recovery of existing tritium is currently the source of most of the tritium in the stockpile, according to NNSA officials.
However, tritium decays relatively rapidly, and in 2015 NNSA identified a need to produce additional tritium. To produce tritium, lithium target rods—called tritium-producing burnable absorber rods—are irradiated in TVA's reactors. The irradiated rods are transported to DOE's Tritium Extraction Facility at the Savannah River Site in South Carolina, where they are processed in a specialized facility to extract and then prepare the tritium for nuclear warheads. NNSA requested $9.8 million in design funds in fiscal year 2018 for construction of a new tritium production capability. In its fiscal year 2018 budget request, NNSA estimated that this facility would cost about $425 million and be approved for operations in 2027.

Lithium – Lithium is a key component of nuclear weapons and is essential for their refurbishment. NNSA has a sufficient supply of enriched lithium-6 (the isotope used in refurbishments and for tritium production), but that lithium is stored in another form and must undergo complex processing before it can be used for these purposes. NNSA halted certain aspects of its lithium processing operation—conducted at its Y-12 site in Oak Ridge, Tennessee—in May 2013 due to the condition of the site's 72-year-old lithium production facility. Currently, NNSA is relying on a less complex but also less efficient process that results in a loss of approximately 50 percent of material. In 2013, NNSA developed a lithium production strategy that proposed a new lithium production facility, which the agency estimated would cost more than $500 million. NNSA plans to request $30.4 million in fiscal year 2019 for construction of this facility. This strategy includes sustaining current infrastructure and deploying new technologies to sustain lithium production.

Appendix II: Comments from the National Nuclear Security Administration

Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact above, Jonathan Gill (Assistant Director), Alisa Beyninson, Antoinette Capaccio, Jeff Larson, Cynthia Norris, and Kiki Theodoropoulos made key contributions to this report.

Related GAO Products

Modernizing the Nuclear Security Enterprise: A Complete Scope of Work Is Needed to Develop Timely Cost and Schedule Information for the Uranium Program. GAO-17-577. Washington, D.C.: September 8, 2017.

Program Management: DOE Needs to Develop a Comprehensive Policy and Training Program. GAO-17-51. Washington, D.C.: November 21, 2016.

DOE Project Management: NNSA Needs to Clarify Requirements for Its Plutonium Analysis Project at Los Alamos. GAO-16-585. Washington, D.C.: August 9, 2016.

Modernizing the Nuclear Security Enterprise: NNSA's Budget Estimates Increased but May Not Align with All Anticipated Costs. GAO-16-290. Washington, D.C.: March 4, 2016.

Modernizing the Nuclear Security Enterprise: NNSA Increased Its Budget Estimates, but Estimates for Key Stockpile and Infrastructure Programs Need Improvement. GAO-15-499. Washington, D.C.: August 6, 2015.

DOE Project Management: NNSA Should Ensure Equal Consideration of Alternatives for Lithium Production. GAO-15-525. Washington, D.C.: July 13, 2015.

DOE and NNSA Project Management: Analysis of Alternatives Could Be Improved by Incorporating Best Practices. GAO-15-37. Washington, D.C.: December 11, 2014.

Project and Program Management: DOE Needs to Revise Requirements and Guidance for Cost Estimating and Related Reviews. GAO-15-29. Washington, D.C.: November 25, 2014.
Nuclear Weapons: Some Actions Have Been Taken to Address Challenges with the Uranium Processing Facility Design. GAO-15-126. Washington, D.C.: October 10, 2014.

Nuclear Weapons: Technology Development Efforts for the Uranium Processing Facility. GAO-14-295. Washington, D.C.: April 18, 2014.

Plutonium Disposition Program: DOE Needs to Analyze the Root Causes of Cost Increases and Develop Better Cost Estimates. GAO-14-231. Washington, D.C.: February 13, 2014.

Nuclear Weapons: Information on Safety Concerns with the Uranium Processing Facility. GAO-14-79R. Washington, D.C.: October 25, 2013.

Nuclear Weapons: Factors Leading to Cost Increases with the Uranium Processing Facility. GAO-13-686R. Washington, D.C.: July 12, 2013.

Nuclear Weapons: National Nuclear Security Administration's Plans for Its Uranium Processing Facility Should Better Reflect Funding Estimates and Technology Readiness. GAO-11-103. Washington, D.C.: November 19, 2010.
Why GAO Did This Study

NNSA is responsible for ensuring a sustainable supply of strategic materials critical to the nation's nuclear security missions, as well as the capability to process these materials. NNSA estimates that strategic materials management activities will cost about $7.7 billion over the next 5 years. The House Report accompanying H.R. 4909, a bill for the National Defense Authorization Act for Fiscal Year 2017, included a provision for GAO to review NNSA's management of its strategic materials programs. This report examines (1) the extent to which NNSA has, for these programs, defined requirements, including program manager roles and responsibilities, and (2) the progress of NNSA's implementation of those program requirements. GAO reviewed NNSA program management policies and documents related to its strategic materials program manager positions and interviewed NNSA officials and program managers.

What GAO Found

The Department of Energy's (DOE) National Nuclear Security Administration (NNSA) manages strategic materials programs for uranium, plutonium, tritium, and lithium—materials that are critical to national security. NNSA has set program requirements that each of the programs must follow and has established the roles and responsibilities of the program managers. NNSA has defined these requirements in two documents:

Program Execution Instruction (2016). Outlines requirements for program management documents, such as a program plan, cost and schedule estimates, and an integrated master schedule that includes the entire scope of work for successful execution.

Program Management Policy (2017). Outlines the program managers' authority and requirements for managing the strategic materials programs, such as managing risk, and requires each program to develop documents, such as a mission strategy and technology development plan.

NNSA officials reported that the agency is making progress implementing the requirements outlined for each of the strategic materials programs, although some of the programs are farther along than others. For example:

The uranium and domestic uranium enrichment programs established in 2014 are the furthest along and have developed the documents needed to meet strategic program requirements.

The plutonium program has met some of the requirements, such as developing a program plan, work breakdown structure, and decision analysis, but does not yet have an integrated master schedule.

The tritium program met the requirements during the course of GAO's review.

The lithium program, which is the newest, has made the least amount of progress and to date has developed only a mission strategy, a mission requirements matrix, and a technology development plan.

According to NNSA officials, a shortage of staff assigned to the strategic materials programs has been the primary factor hampering progress in implementing the program requirements. For example, a lithium program manager has not yet been assigned, and all the other programs have identified the need for additional staff beyond the one or two staff currently assigned to each. According to officials, competing agency priorities and perceived staffing limits are the primary impediments to assigning more staff to these programs. However, GAO also found that NNSA has not determined the critical skills and competencies needed for these programs. GAO's prior work has identified certain activities or practices that can help an agency strategically manage its human capital.
These activities include determining the critical skills and competencies that will be needed to achieve the program's mission and developing strategies to address gaps in the number, deployment, and alignment of staff needed. By determining the critical skills and competencies needed for the strategic materials programs and using this determination to develop strategies to address any gaps in the number, deployment, and alignment of program staff, NNSA may have the information it needs to better justify increased staffing levels for the programs. What GAO Recommends GAO recommends that NNSA determine the critical skills and competencies that will be needed for the strategic materials programs and use this determination to develop strategies for addressing any gaps related to the number, deployment, and alignment of program staff. NNSA agreed with GAO's recommendation.
Background

DHS's Homeland Security Grant Program

The federal government has provided financial assistance to public and private stakeholders for preparedness activities through various grant programs administered by DHS through its component agency, FEMA. Through these grant programs, DHS has sought to enhance the capacity of states, localities, and other entities, such as ports or transit agencies, to prevent, prepare for, protect against, respond to, recover from, and mitigate natural or manmade disasters, including terrorist incidents. Two of the largest preparedness grant programs are the SHSP and UASI grant programs.

SHSP grants provide federal assistance to support states' implementation of homeland security strategies to address the identified planning, organization, equipment, training, and exercise needs at the state and local levels to prevent, prepare for, protect against, and respond to acts of terrorism. SHSP grants are awarded annually to all of the nation's 56 states and territories. SHSP grant awards are calculated in two parts. All states and territories are to receive a minimum grant amount required by law, based on a percentage of the total amount of SHSP and UASI appropriations in a given fiscal year. The remaining award amounts are based on FEMA's risk-based grant assessment model.

UASI grants provide federal assistance to address the unique needs of high-threat, high-density urban areas and assist those areas in building an enhanced and sustainable capacity to prevent, prepare for, protect against, and respond to acts of terrorism. Since 2015, Congress has instructed, through the Explanatory Statements accompanying the annual DHS Appropriations Acts, that UASI grants should be awarded to urban areas that reflect up to 85 percent of nationwide risk. For the UASI program, FEMA uses the risk-based grant assessment model each year to identify those urban areas that will be eligible to receive funding.

Annual funding for the SHSP and UASI programs has generally declined over the period of fiscal years 2008 through 2018 but has remained consistent since fiscal year 2016. Figure 1 shows the changes to the SHSP and UASI programs' annual funding during this period. For example, annual funding for SHSP decreased from about $861 million in fiscal year 2008 to $402 million in fiscal year 2018. During this same period, annual funding for UASI also declined, from about $782 million in fiscal year 2008 to $580 million in fiscal year 2018. However, annual funding for the UASI program has been higher than for the SHSP program since fiscal year 2010.

FEMA's Risk-based Grant Assessment Model for Distributing Funding Awards

Risk = Threat x Vulnerability x Consequence

Threat – A natural or man-made occurrence, individual, entity, or action that has or indicates the potential to harm life, information, operations, and/or property.

Vulnerability – A physical feature or operational attribute that renders an entity, asset, system, network, or geographic area open to exploitation or susceptible to a given hazard.

Consequence – The effect of an event, incident, or occurrence, commonly measured in four ways: human, economic, mission, and psychological, but may also include other factors such as impact on the environment.

FEMA's risk-based grant assessment model uses three variables: Threat, Vulnerability, and Consequence.
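To make the combination of these variables concrete, the sketch below computes illustrative relative risk scores. It is a minimal sketch: FEMA's published formula is multiplicative (Risk = Threat x Vulnerability x Consequence), but the index weights described later in this report suggest a weighted combination of normalized scores, so a simple weighted sum is used here for illustration. All area names and scores are hypothetical.

```python
# Minimal sketch of combining normalized index scores into a relative risk
# score, using the fiscal year 2018 index weights discussed later in this
# report (Threat 25 percent, Vulnerability 25 percent, Consequence 50
# percent). FEMA's actual model involves many more data elements and
# normalization steps.

WEIGHTS = {"threat": 0.25, "vulnerability": 0.25, "consequence": 0.50}

def relative_risk(scores):
    """Combine normalized (0-1) index scores into a single risk score."""
    return sum(WEIGHTS[index] * scores[index] for index in WEIGHTS)

areas = {
    "Urban Area A": {"threat": 0.9, "vulnerability": 0.7, "consequence": 0.95},
    "Urban Area B": {"threat": 0.3, "vulnerability": 0.5, "consequence": 0.20},
}

# Rank areas from highest to lowest relative risk.
for name in sorted(areas, key=lambda a: relative_risk(areas[a]), reverse=True):
    print(f"{name}: {relative_risk(areas[name]):.3f}")
```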
The purpose of this model is to apply a risk management process that provides a structured means of making informed trade-offs and choices about how to use finite resources effectively, and of monitoring the effect of those choices. Inherent "uncertainty" is associated with any effort to develop a risk model, such as one assessing the risk of terrorist attacks, and such models thus require the application of policy judgments and analytic assumptions. The effect that uncertainty has on the results of a risk model can be especially important if the model produces materially different results in response to even small changes in assumptions, often referred to as the "sensitivity" or "robustness" of a model's assumptions and results.

As we reported in June 2008, FEMA's risk-based grant methodology and its continuous improvement efforts in estimating risk were part of a reasonable process to assist in determining SHSP and UASI grant allocations. For example, the risk-based grant assessment model used from fiscal year 2001 through 2003 relied largely on measures of population to determine the relative risk of potential grantees, and it evolved to measure risk as the sum of threat, critical infrastructure, and population density calculations in fiscal years 2004 and 2005. The fiscal year 2006 process introduced a risk assessment model that included measures of Threat, Vulnerability, and Consequence.

In June 2008, we reported that the way the risk-based grant assessment model measured vulnerability across states and urban areas was limited. We found that the model considered all states and urban areas equally vulnerable to a successful attack, and as a result, the final risk scores were determined exclusively by the Threat and Consequence scores. Specifically, the risk model did not measure vulnerability for each state and urban area; rather, it assigned a vulnerability score of 1.0 to every state and urban area. We recommended that DHS and FEMA formulate a methodology to measure variations in vulnerability across states and urban areas. DHS components concurred with our recommendation to measure vulnerability in a way that captures variations across states and urban areas and to apply this measure in future iterations of FEMA's model. In August 2011, FEMA reported that the agency, in coordination with other DHS components, had established a Vulnerability Index for the fiscal year 2011 risk-based grant assessment model to better capture the risk to states and urban areas, thereby addressing our recommendation.

Other Reviews of FEMA's Risk Methodology

DHS and the National Research Council (NRC) have also reviewed FEMA's risk assessment methodologies since our 2008 review, providing their own conclusions and recommendations. For example, in 2010, the NRC reported that FEMA should strengthen its scientific practices, such as documentation, analyses to determine how changes to a model could affect its results, and peer review by technical experts external to DHS, in order to further develop an understanding of the uncertainties in its terrorism-related risk analyses. Additionally, in 2016, the Homeland Security Advisory Council reported that the processes by which FEMA assesses risk should be made more inclusive, comprehensive, and effective.
The Homeland Security Advisory Council recommended the following actions to strengthen this process:

FEMA should continue to send risk profiles to states and urban areas to promote timely and meaningful feedback and to enable FEMA to evaluate recommended adjustments.

Before each year's budget submission, FEMA should discuss the current grant allocation mechanism with congressional appropriators.

We discuss FEMA's progress in implementing these recommendations later in this report.

Various Factors Affected SHSP and UASI Grant Allocations to States and Urban Areas From Fiscal Years 2008 Through 2018

SHSP Allocations Reflect Both a State's Relative Risk Score and the Minimum Allocation by Law

While all states and territories receive a minimum SHSP grant allocation by law, the risk-based grant assessment model informs the allocation of the remaining funds to each state. However, for a majority of states each year, SHSP grant awards are primarily based on the legal minimum amount. For example, in fiscal year 2012, 34 states, such as New Mexico, were each awarded $2,801,000, which included $2,745,000 based on the minimum amount set by law and $56,000 based on the state's risk level. By contrast, New York was one of the high-risk states based on the risk model. For that same fiscal year, New York received a total of $55,610,000, which included $2,745,000 based on the minimum amount set by law plus $52,865,000 based on its risk level.

Over the period from fiscal year 2008 through fiscal year 2018, the number of low-risk states whose SHSP grant awards were primarily based on the legal minimum amount varied from year to year, from 19 states in fiscal year 2008 to 37 states in fiscal year 2018, as shown in table 1. In addition, from fiscal year 2008 through fiscal year 2018, the percentage of total SHSP funds awarded to states and territories based on FEMA's risk model decreased. This percentage ranged from a high of 63 percent in fiscal year 2009 (about $536 million of the $851 million in total SHSP funds) to a low of 51 percent in fiscal year 2012 (about $149 million of $294 million in total SHSP funds). For fiscal year 2018, the share of total SHSP funds awarded based on the risk-based grant assessment model was 55 percent, about $220 million of $402 million. For specific details on SHSP grant allocations for fiscal years 2008 through 2018 by states and territories, see appendix I, table 4.
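The two-part SHSP calculation just described can be sketched as follows. It is a minimal illustration, assuming the funds above the legal minimums are distributed in proportion to relative risk scores; actual awards also reflect DHS leadership decisions, and the state names, scores, and totals here are hypothetical (the per-state minimum echoes the fiscal year 2012 figure cited above).

```python
# Minimal sketch of a two-part SHSP-style allocation: every state receives
# a legal minimum, and the remaining funds are split in proportion to each
# state's relative risk score. All figures are illustrative.

def allocate_shsp(total, minimum, risk_scores):
    remainder = total - minimum * len(risk_scores)
    total_risk = sum(risk_scores.values())
    return {
        state: minimum + remainder * score / total_risk
        for state, score in risk_scores.items()
    }

awards = allocate_shsp(
    total=60_000_000,
    minimum=2_745_000,  # illustrative legal minimum per state
    risk_scores={"State A": 95.0, "State B": 1.0, "State C": 4.0},
)
for state, amount in awards.items():
    print(f"{state}: ${amount:,.0f}")
```

In this illustration, the low-risk states' awards are dominated by the legal minimum, mirroring the pattern described above for the majority of states each year.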
UASI Grantee Eligibility and Allocations Reflect Results from FEMA's Risk-Based Grant Assessment Model

The UASI program uses FEMA's risk-based grant assessment model to identify which of the nation's 100 largest urban areas are eligible for grant awards in a particular fiscal year. FEMA's risk model then also helps inform DHS leadership's decisions on the final funding amounts for each grantee, according to FEMA officials. Specifically, in determining the eligible urban areas, FEMA annually assesses the risk of the 100 most populous metropolitan statistical areas (geographical regions with a relatively high population density at their core and close economic ties throughout the area), as defined by the Office of Management and Budget. From these 100 urban areas, the risk-based grant assessment model identifies those that, consistent with recent congressional intent, collectively account for up to 85 percent of nationwide risk each year. Those urban areas below this 85 percent threshold are ineligible for UASI grant awards in that fiscal year, according to FEMA officials.

From fiscal years 2008 through 2018, the number of UASI grantees has remained relatively stable since fiscal year 2011. As figure 2 shows, the annual number of grantees fluctuated over the period, ranging from 60 to 64 grantees during fiscal years 2008, 2009, and 2010. Since fiscal year 2011, however, the number of UASI grantees has averaged 31 urban areas, with a high of 39 urban areas in fiscal year 2014 and a low of 25 urban areas in fiscal year 2013. For fiscal year 2018, 32 urban areas were UASI grantees. For additional details on UASI grant awards for fiscal years 2008 through 2018 by urban areas, see appendix I, table 5.

Because the UASI grant program is required by annual congressional guidance to fund only those urban areas that comprise up to 85 percent of risk nationally, this eligibility cutoff can result in different urban areas being eligible from one year to the next. Specifically, as we demonstrated in June 2008, the variation of risk across urban areas takes on the distribution curve illustrated in figure 3. The few urban areas with the highest relative risk scores are represented along the steep part of the relative risk curve. For example, the urban areas receiving the highest awards, informed by their risk scores and ranks, are generally the same each fiscal year: New York City, Los Angeles, and Chicago, as seen in table 2. Urban areas with less relative risk are represented along the flat section of the curve; such areas may not fall within the 85 percent of risk nationally during a specific year and thus would be ineligible to receive UASI funding during that year. Table 3 lists the lowest-funded urban areas for the last 5 fiscal years, based on our analysis of the funding amounts each received within each fiscal year. For example, during the period of fiscal year 2008 through fiscal year 2018, San Antonio, Texas, and Hampton Roads, Virginia, received awards only in fiscal years 2008, 2009, 2014, 2017, and 2018.

In addition to changes in an urban area's risk ranking from one year to the next, the share of total UASI funds that an urban area receives in a given year can also change. FEMA has established a process for developing grant award funding options based on the results of the risk-based grant assessment model. These funding options are provided to the Secretary of Homeland Security for consideration and final approval. According to FEMA officials, the options may vary each year based on DHS leadership's priorities and concerns at the time; however, all options include only those eligible grantees that represent up to 85 percent of the nation's risk, as determined by the risk-based grant assessment model. In fiscal year 2013, FEMA shifted its UASI grant funding to a process referred to as "funding bands." In fiscal year 2018, for example, UASI grantees such as Orlando, Florida; Hampton Roads, Virginia; and San Antonio, Texas each received a $1.5 million UASI grant, whereas a grouping of UASI grantees that included Sacramento, California; Pittsburgh, Pennsylvania; and Portland, Oregon each received $2.5 million.
According to FEMA officials, grouping jurisdictions with similar risk scores into funding bands is an effort to stabilize and retain grantees' funding levels over multiple years, as annual UASI grants fund projects that are multiyear investments carried out over a 24- to 36-month performance period. For example, if one jurisdiction rose by four ranks and another jurisdiction in the same group dropped six ranks, the two jurisdictions would stay in the same funding band as long as their overall risk scores remained close together. The purpose of the funding bands is to ensure some consistency in funding for jurisdictions despite minor changes in their relative risk rankings. FEMA looks at the natural risk breaks and historical grant allocation data for each year. For example, each year FEMA presents for DHS leadership's consideration the historical funding and the number of urban areas placed in specific funding bands in prior grant years, if any, and the differences between the relative risk scores in the current fiscal year. According to FEMA officials, the last few grant years have produced similar funding bands, which remain subject to change depending on DHS leadership's final decisions.
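Together, the 85 percent eligibility cutoff and the banding approach can be sketched as follows. This is a minimal illustration, assuming a simple cumulative-share cutoff and band breaks at gaps between consecutive scores larger than a fixed threshold; FEMA's actual process also weighs historical funding and DHS leadership decisions, and all area names, scores, and thresholds are hypothetical.

```python
# Minimal sketch of UASI-style eligibility and banding. Areas are ranked by
# relative risk score; areas remain eligible until the cumulative share of
# nationwide risk reaches 85 percent, and eligible areas are then grouped
# into funding bands at "natural breaks" in the scores.

def eligible_areas(scores, threshold=0.85):
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    nationwide = sum(scores.values())
    eligible, cumulative = [], 0.0
    for area, score in ranked:
        if cumulative / nationwide >= threshold:
            break  # remaining areas fall outside 85 percent of national risk
        eligible.append((area, score))
        cumulative += score
    return eligible

def funding_bands(ranked, gap=5.0):
    bands, current = [], [ranked[0]]
    for (_, prev_score), item in zip(ranked, ranked[1:]):
        if prev_score - item[1] > gap:  # a natural break starts a new band
            bands.append(current)
            current = []
        current.append(item)
    bands.append(current)
    return bands

scores = {"Area A": 40.0, "Area B": 38.0, "Area C": 15.0, "Area D": 12.0,
          "Area E": 5.0, "Area F": 3.0}
eligible = eligible_areas(scores)
for i, band in enumerate(funding_bands(eligible), start=1):
    print(f"Band {i}: {[name for name, _ in band]}")
```

In this illustration, Areas E and F fall outside the 85 percent cutoff, and the four eligible areas split into two bands at the large gap between Area B's and Area C's scores.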
FEMA Has Improved Its Risk-based Grant Assessment Model, but Additional Steps Could Further Strengthen Its Model

FEMA Has Taken a Number of Steps to Improve the Risk-based Grant Assessment Model for Allocating SHSP and UASI Grants

Since 2008, FEMA has taken a number of steps to assess and improve its risk-based grant assessment model for allocating grants, drawing on past reviews, our prior recommendations, and various changes related to evolving terrorist threats and real-world scenarios. For example, FEMA added a Vulnerability Index to its risk model in 2011 in response to our 2008 recommendation. Most recently, for fiscal year 2018, FEMA included a "soft target index." According to FEMA officials, this index was added to account for the current threat to areas where crowds congregate. Figure 4 illustrates the timeline of FEMA's changes to the risk-based assessment model and prior assessments. Figure 5 depicts the risk-based grant assessment model used for fiscal year 2018 SHSP and UASI grant awards. Figure 6 depicts the changes in the Threat, Vulnerability, and Consequence indexes used in the risk-based grant assessment model for fiscal year 2008, compared to 2018. As we noted above, the 2008 risk model did not measure Vulnerability for each state and urban area, and risk scores were essentially determined by the Threat and Consequence indexes.

Changes to the Consequence Index can have the most impact on the relative risk scores because of the weight of this index (50 percent), relative to the weights for the Threat and Vulnerability indexes. Further, the weight for population within the Consequence Index represented 30 percent of the total fiscal year 2018 risk model value. As a result, the weight for the population index was greater than the weight of either the Threat Index or the Vulnerability Index, each 25 percent. FEMA has decreased the weight for the population index over time, from 40 percent in 2008 to 30 percent in 2011, where it has remained through 2018. For fiscal year 2018, FEMA modified how the population index was calculated within the Consequence Index to better account for attacks staged by individuals, so-called lone wolves. FEMA did so, in part, by reducing the importance of population density within the population index. In past risk models, the population index had favored high-density, high-rise urban areas, commensurate with building destruction scenarios (the 9/11-style attack scenarios that focused on large building destruction events), according to FEMA officials. The 2018 change to cap population density in the population index reduces the impact those extremely dense population areas have in the methodology, according to FEMA officials. The other measures used to make up the Consequence Index remain relatively unchanged since our review in 2008, although FEMA has renamed the indexes.
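A density cap of the kind just described can be sketched simply. This is a minimal illustration, assuming the capped density is normalized to a 0-1 score; the cap value and densities are hypothetical, and FEMA's actual calculation is not published at this level of detail.

```python
# Minimal sketch of capping population density so extremely dense areas no
# longer dominate a normalized index. The cap and densities are illustrative.

def capped_density_score(density, cap=30_000):
    """Normalize density to a 0-1 score, treating anything at or above
    the cap as the maximum score."""
    return min(density, cap) / cap

for people_per_sq_mile in (2_000, 15_000, 30_000, 70_000):
    print(f"{people_per_sq_mile:>6}: {capped_density_score(people_per_sq_mile):.2f}")
```

With the cap in place, an area far denser than the cap scores the same as one at the cap, reducing the dominance of high-rise urban cores in the index.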
Vulnerability Index

Designed to measure the likelihood of a successful attack in a state or urban area, based on (a) intelligence information on those critical infrastructure assets identified by foreign or domestic terrorists; (b) the extent of international border entries (land, sea, and air) located in a state or urban area; and (c) special events where crowds congregate and are susceptible to homegrown extremism and lone wolf attacks.

As explained earlier, FEMA added a Vulnerability Index to its risk-based grant assessment model in 2011, in response to our 2008 recommendation. According to FEMA officials, the Vulnerability Index helps support what DHS is trying to protect, primarily citizens and critical infrastructure. For example, the Vulnerability Index includes a measure designed to assess the extent to which certain types of national critical infrastructure assets may be considered for possible attack. This Targeted Infrastructure Index measure uses actionable intelligence on types of critical infrastructure targets, such as aviation, mass transit, and commuter rail. FEMA works with DHS's National Protection and Programs Directorate to match its critical infrastructure dataset to actionable intelligence from DHS's Office of Intelligence & Analysis to compile this measure.

For the fiscal year 2018 grant, FEMA included a "soft target index." According to FEMA officials, this index was added to account for the current threat to areas where crowds congregate. Based on feedback previously received through its stakeholder process, FEMA updated the fiscal year 2018 risk methodology to better account for the nation's current threat environment. The soft target index is composed of two new data elements:

Visitors (domestic and international), using the same data used in the calculation of the Population Index; and

a special events measure, which uses Special Event Assessment Rating data from DHS's Office of Operations Coordination to identify large state and local events that may require federal assistance. Examples of such events include the Super Bowl, the Boston Marathon, and New Year's Eve in Times Square.

In fiscal year 2018, FEMA also added a new "isolation" measure to account for the response challenges faced by states, territories, and urban areas outside the contiguous United States, which rely on prompt mutual aid from neighboring jurisdictions. According to FEMA officials, the isolation data element was included in response to challenges the agency witnessed during the 2017 hurricane season, specifically the unique challenges of distant U.S. territories receiving timely mutual aid from other states. For example, if Hawaii, Guam, or American Samoa were attacked, there would be little to no outside help for a number of days. As a result, FEMA reduced the fiscal year 2018 weight of the Border Crossings data element from 6 percent to 4 percent in order to establish a 2 percent weight for the isolation measure.

Threat Index

The weight of the Threat Index was raised from 20 percent to 30 percent in fiscal year 2011 and was modified again for fiscal year 2018. Specifically, according to FEMA and DHS officials, DHS leadership made a policy decision to reduce the Threat Index's weight from 30 percent in 2017 to 25 percent in 2018 to reflect the changing threat environment, after Congress, in the Explanatory Statement accompanying the fiscal year 2017 DHS Appropriations Act, directed FEMA to review the risk model to account for that environment. FEMA officials further explained that domestic terrorism and soft targets are considered to be prevalent nationwide and pose more of a challenge in identifying the source of actionable threats. FEMA officials stated that this modification to the Threat Index better reflects real-world scenarios. Since fiscal year 2012, FEMA has included information on domestic terrorism as well as international terrorism in its Threat Index. According to DHS officials, homegrown extremism is also a likely threat, often through lone wolf attacks. DHS officials decided to assign all urban areas a minimum threat score to reflect the fact that all areas have some level of threat. According to DHS officials, the addition of a domestic terror threat measure resulted in a decrease in the variation of threat scores across states and urban areas. DHS officials also noted that with lone wolf attacks it is difficult to determine who the actors may be, or when and where they will attack.

Stakeholder Feedback

FEMA annually transmits risk profile information to states and urban areas to promote timely and meaningful feedback. According to FEMA officials, draft risk profiles are sent to all 56 states and territories and the 100 eligible urban areas shortly after the enactment of DHS's annual appropriations. States and urban areas are given a 2-week period prior to the release of the Notices of Funding Opportunity to review their draft risk profiles and provide FEMA any comments or data corrections that should be considered. According to FEMA officials, FEMA encourages and welcomes stakeholders to suggest new or different data sets for the subsequent fiscal year's risk assessment at any time during the year. FEMA also conducts webinars during this period to explain the risk profiles in detail and to discuss any updates to data sets and enhancements to the risk assessment. These webinars often result in feedback on the data elements and methodology of the risk-based grant assessment model, according to FEMA officials. According to FEMA officials, this feedback process has helped guide FEMA's consideration of enhancements to the risk-based grant assessment model. For example, FEMA officials noted that this process helped them incorporate the soft target index into the 2018 risk model.

FEMA Does Not Fully Make Use of Recognized Scientific Practices in Maintaining Its Risk Assessment Model

In 2010, the National Research Council (NRC) recommended that incorporating scientific practices can provide decision makers a further understanding of the effects of policy judgments and assumptions (that is, addressing uncertainties) in terrorism-related risk analyses.
The NRC identified "good scientific practice" for model-based work. Specifically, the NRC recommended that detailed documentation for all risk models, including rigorous mathematical formulations, be implemented department-wide. Additionally, the NRC recommended that all risk models, including the risk-based grant assessment model, undergo verification and validation, or at the least a sensitivity analysis. Finally, the NRC recommended that FEMA undertake an external peer review by technical experts outside of DHS and review its risk-informed formulas in order to identify issues such as logic flaws, evaluate the ramifications of the choices of weightings and parameters, and improve the risk model's transparency. However, FEMA has not fully adopted these scientific practices for its risk-based grant assessment model.

Documentation: FEMA's documentation on the sources of data used for the model's calculations does not include information that would enable a reviewer to understand the underlying assumptions that form the basis for its risk-based grant assessment model, such as the size of the weights assigned to Threat, Vulnerability, and Consequence, or the justification for changes to these weights from one year to the next. FEMA officials stated that they focus their limited time and resources on developing the executive summary-level materials that DHS leadership will use to determine final grant eligibility and grant allocation amounts. To a lesser extent, FEMA officials said, they rely on the expertise of subject matter experts from DHS's Office of Intelligence and Analysis and DHS's National Protection and Preparedness Division's Office of Cyber and Infrastructure Analysis, the parts of DHS that contribute to the annual risk assessment process. In April 2018, we identified documentation as one of the key methodological elements of the baseline structure of an economic analysis. Specifically, these elements include that the analysis is clearly written with a plain language summary, has clearly labeled tables that describe the data used and results, and has a conclusion that is consistent with these results. The analysis should cite all sources used and document that it is based on the best available economic information. The analysis should also document that it complies with a robust quality assurance process and, where applicable, the Information Quality Act, and should disclose the use and contributions of contractors and outside consultants. FEMA officials agreed with our analysis of FEMA's supporting documentation and stated that maintaining additional documentation could further assist reviewers. Documenting how subject matter expert assumptions are made would help FEMA increase the transparency of the model for key internal and external stakeholders.

In-Depth Analyses: Similarly, we could not determine whether FEMA sufficiently performed all the analyses of the model's sensitivity needed to determine how changes to its risk-based grant assessment model could affect the resulting risk scores. FEMA officials stated that they have analyzed the effect of a data element only when it has been added to the model (for example, the soft target index in 2018). Further, FEMA officials were unable to provide us with documentation of their sensitivity analysis processes or results. DHS's Risk Lexicon states that sensitivity analysis can be used to examine how individual variables affect the outputs of risk assessment methodologies.
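To illustrate the kind of sensitivity analysis described above, the sketch below perturbs one index weight and reports how the resulting risk ranking changes. It is a minimal, one-at-a-time sketch; all scores and weights are hypothetical, and a fuller analysis would vary every assumption across a range of values.

```python
# Minimal sketch of a one-at-a-time sensitivity analysis: vary the Threat
# weight (rebalancing the remainder between the other two indexes in a 1:2
# ratio, mirroring the 25/25/50 structure) and observe whether the ranking
# of illustrative areas changes.

def risk_scores(areas, w_threat):
    w_vuln = (1.0 - w_threat) / 3.0
    w_cons = 2.0 * w_vuln
    return {
        name: w_threat * s["t"] + w_vuln * s["v"] + w_cons * s["c"]
        for name, s in areas.items()
    }

def ranking(scores):
    return sorted(scores, key=scores.get, reverse=True)

areas = {
    "Area A": {"t": 0.90, "v": 0.40, "c": 0.50},
    "Area B": {"t": 0.40, "v": 0.60, "c": 0.70},
}

for w_threat in (0.20, 0.25, 0.30):
    print(w_threat, ranking(risk_scores(areas, w_threat)))
```

In this illustration, raising the Threat weight from 25 to 30 percent flips the two areas' ranking, which is exactly the kind of effect, including potential eligibility and funding shifts, that a documented sensitivity analysis would surface for decision makers.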
In addition, OMB Circular A-94 recommends that the outcomes from a risk model be analyzed to determine how sensitive they are to changes in the model's assumptions. The assumptions that deserve the most attention will depend on the dominant elements and the areas of greatest uncertainty of the program being analyzed. Research in the actuarial sciences likewise states that sensitivity analysis "is of fundamental importance to risk analysts, especially in the presence of complex computational models with uncertain inputs." As we stated earlier, understanding the effect of uncertainty on the results of the model can be especially important if the model produces materially different results in response to even small changes in assumptions, often referred to as the "sensitivity" or "robustness" of a model's assumptions and results. We reported on FEMA's risk-based grant assessment model in June 2008 and March 2013, and we found grant years in which the risk model was sensitive to even small changes. For example, we noted that a potential increase or decrease in a measure would have resulted in one urban area displacing the eligibility of another, thereby potentially shifting funding as well.

As with documentation, FEMA officials stated that they focus their limited time and resources on developing the executive summary-level materials that DHS leadership will use to determine final grant eligibility and grant allocation amounts. FEMA officials agreed that they could better document the steps used in their analyses across all the model's measures and weights so that a complete understanding of potential impacts is documented and can be made available to leadership when making decisions about changes. FEMA's implementation of sensitivity analyses could help the agency assess changes to the risk-based grant assessment model, including the introduction of new data elements into the Threat, Vulnerability, and Consequence indexes; modifications to how existing data elements are calculated; and changes to the weights assigned to the Threat, Vulnerability, and Consequence indexes. Further, sensitivity analyses can show decision makers the impact, or predicted impact, of adjustments to FEMA's risk-based grant assessment model, including potential shifts in funding toward or away from certain grantees.

Use of External Peer Review: FEMA has not subjected its risk-based grant assessment model to a peer review by independent, external technical experts, as recommended in 2010 by the NRC. According to FEMA officials, the risk assessment methodology has undergone comprehensive internal reconsideration over time to better reflect real-world scenarios, but such reviews have not included external peer reviews. FEMA officials stated that the risk-based grant assessment model has gone through past reviews, including a review as part of DHS's quadrennial review in 2014, and that the model is reviewed by internal subject matter experts from DHS's Office of Intelligence and Analysis and DHS's National Protection and Preparedness Division's Office of Cyber and Infrastructure Analysis as part of the annual risk assessment process. FEMA officials stated that the agency is exploring the possibility of participating in a DHS collaborative group to internally review and provide feedback on the model's underlying assumptions and methods.
Such a group could review the underlying components of the current risk-based grant assessment model and suggest improvements, as well as present and evaluate other risk assessment theories and approaches. FEMA officials told us they have encountered time and resource constraints in establishing an external peer review process. As we have previously reported, independent external peer reviews can increase the probability of success by improving the technical quality of projects and the credibility of the decision-making process, and they provide reasonable assurance that the agency's approach is reproducible and defensible. In addition, in December 2004, OMB issued the memorandum "Final Information Quality Bulletin for Peer Review," which established government-wide guidance aimed at enhancing the practice of peer review of government science documents. OMB noted that peer review can increase the quality and credibility of the scientific information generated across the federal government, in an effort to improve the quality of the scientific information upon which policy decisions are based. OMB also noted that, while peer review may take a variety of forms, agencies will need to consider at least the following issues when coordinating an external peer review: individual versus panel review; timing; scope of the review; selection of reviewers; disclosure and attribution; public participation; disposition of reviewer comments; and adequacy of prior peer review. These scientific processes are designed to help decision makers better understand the impact or predicted impact of risk management alternatives and to provide greater confidence in the reliability of the risk assessment model's results. Full implementation of these processes would better position FEMA to provide further assurance that its risk-based grant assessment model and grant allocation approaches are reasonable, of high quality, and credible.

Conclusions

Given that risk management has been endorsed by the federal government as a way to direct finite resources to the states and urban areas that are most at risk of terrorist attack, it is important that FEMA's risk-based grant assessment model support the application of policy judgments and analytic assumptions in the model's role of allocating those limited resources. Decreased funding levels for the SHSP and UASI grant programs have increased the importance of using risk management techniques to more effectively target finite federal dollars. DHS and FEMA have strengthened the risk-based grant assessment model for allocating grants, taking into account analysis and recommendations from a variety of reviews. These improvements include the addition of a Vulnerability Index and modifications to the Threat Index. We have identified opportunities for FEMA to strengthen its scientific practices. First, documenting the model's underlying assumptions and the results of sensitivity analyses can assist decision makers in better understanding the predicted impact of risk management alternatives. Second, expanding the use of sensitivity analysis could further enhance the model; developing a greater understanding of how uncertainty affects the risk-based grant assessment model's results helps achieve the objectives of risk management.
Third, coordinating an independent external peer review of the methodology of its risk-based grant assessment model would better position the agency to provide reasonable assurance that the risk model and grant allocation approach FEMA uses for its SHSP and UASI programs are reasonable, of high quality, and credible. Applying such scientific practices could assist FEMA in further strengthening its risk-based grant assessment model.

Recommendations for Executive Action

We are making the following three recommendations to FEMA.

The FEMA Administrator should fully document the underlying assumptions and justifications that form the basis of the risk-based grant assessment model, such as the size of the weights assigned to Threat, Vulnerability, and Consequence, and the justification for changes to these weights from one year to the next.

The FEMA Administrator should perform sensitivity analyses to verify how changes to the risk-based grant assessment model could affect the resulting risk scores, and document the results.

The FEMA Administrator should take steps to coordinate an independent, external peer review of the risk-based grant assessment model.

Agency Comments and Our Evaluation

We provided a draft of this product to FEMA and DHS for comment. In its comments, reproduced in appendix II, FEMA generally concurred with our findings and three recommendations. In concurring with our first recommendation, that the agency fully document the underlying assumptions and justifications that form the basis of the risk-based grant assessment model, FEMA requested that GAO consider the recommendation resolved and closed as implemented. In its response, FEMA reiterated its process of providing draft risk profiles to all 100 urban areas and 56 states and territories and its annual communications to Congress on how FEMA calculated risk and computed grant awards. We recognized FEMA's stakeholder feedback efforts in this report. However, as we noted, FEMA's documentation on the sources of data used for the model's calculations does not include information that would enable a reviewer to understand the underlying assumptions that form the basis for the model. Further, as stated earlier, documentation is one of the key methodological elements of the baseline structure of this type of analysis: the analysis should document that it complies with a robust quality assurance process and, where applicable, the Information Quality Act, and should disclose the use and contributions of contractors and outside consultants. To fully implement this recommendation, FEMA should document how subject matter expert assumptions are made; doing so would increase the transparency of the model for key internal and external stakeholders and would further support an independent external peer review of FEMA's risk-based assessment model. Regarding the second recommendation, FEMA concurred, stating that the agency will expand the use of sensitivity analysis to review the entire risk methodology and will document the results for leadership review, as appropriate. Finally, regarding the third recommendation, FEMA concurred, stating that it will coordinate an independent external peer review and develop a detailed written response to leadership for further appropriate action. FEMA and DHS also provided technical comments, which we incorporated as appropriate.
We are sending copies of this report to the appropriate congressional committees, the Secretary of Homeland Security, and other interested parties. This report will also be available at no charge on our website at http://www.gao.gov. Should you or your staff have any questions concerning this report, please contact me at (202) 512-8777 or CurrieC@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report.

Appendix I: Grant Funding and Awards for the State Homeland Security Grant Program (SHSP) and the Urban Area Security Initiative (UASI) Grant Program for Fiscal Years 2008 Through 2018

[Appendix I tables present, by fiscal year, each state's and territory's total award and the amount awarded above the legal minimum.]

Appendix II: Comments from the Department of Homeland Security

Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact

Chris P. Currie, at (202) 512-8777 or CurrieC@gao.gov.

Staff Acknowledgments

In addition, key contributors to this report were Aditi Archer; Chris Keisling, Assistant Director; John Vocino, Analyst-in-Charge; Chuck Bausell; Dominick Dale; Dorian Dunbar; Eric Hauswirth; Serena Lo; Heidi Nielson; and Hadley Nobles.
Why GAO Did This Study

FEMA, a component of DHS, provides preparedness grants to state, local, tribal, and territorial governments to help prepare for, prevent, protect against, respond to, recover from, and mitigate terrorist attacks or other disasters. SHSP grants fund the nation's 56 states and territories, while UASI grants fund eligible urban areas. Grant allocations have been based, in part, on FEMA's risk-based grant assessment model, with states and urban areas deemed to be at higher risk receiving more grant dollars than those deemed at lower risk. Since 2008, GAO and others have assessed the model and made recommendations to strengthen it.

This report (1) describes SHSP and UASI grant awards during fiscal years 2008 through 2018 and factors affecting grant distributions; and (2) examines the steps that FEMA has taken to strengthen its risk assessment model for allocating preparedness grants, and any additional opportunities to improve the model. GAO analyzed the information in FEMA's model and data on SHSP and UASI grant awards for fiscal years 2008 through 2018. GAO also interviewed FEMA and DHS officials and collected documents.

What GAO Found

GAO found that various factors affected Federal Emergency Management Agency (FEMA) State Homeland Security Program (SHSP) and Urban Area Security Initiative (UASI) grant awards from fiscal years 2008 through 2018. SHSP grant awards to states were based on two factors: (1) minimum amounts set in law each year, and (2) FEMA's risk model. For example, in fiscal year 2012, each state was to receive a minimum of approximately $2.74 million, with each state receiving additional funds based on its relative risk score. UASI grant awards, by contrast, are based on FEMA's risk-based grant assessment model, which ranks each urban area relative to others in that year, and on Department of Homeland Security (DHS) leadership decisions about how funding should be allocated. From fiscal years 2008 through 2018, the number of UASI grantees varied from year to year (see figure below).

Since 2008, FEMA has taken steps to strengthen its risk-based grant assessment model but has not incorporated additional scientific practices into its model. For example, in 2011 FEMA included more information in its model on potential targets and their vulnerability in each state and urban area, addressing a prior GAO recommendation. More recently, in 2018, FEMA added additional factors to better assess vulnerability in each state and urban area, such as the number of special events where large crowds gather and soft targets susceptible to lone wolf attacks, among other things. However, GAO found that FEMA does not fully utilize scientific practices recognized by the National Research Council and the Office of Management and Budget as best practices. Specifically, FEMA did not fully document its model's underlying assumptions, such as the weights in its model or the justification for changes to these weights. FEMA also did not perform the level of analysis needed to determine how changes to its model could affect the resulting risk scores. Finally, FEMA has not coordinated an independent external peer review of its model. Applying such scientific practices could assist FEMA in further strengthening its model.
What GAO Recommends

GAO is making three recommendations to FEMA to further strengthen its risk-based grant assessment model by (1) fully documenting the model's assumptions and justifications, (2) performing additional in-depth analyses, and (3) coordinating an external peer review. FEMA concurred with the recommendations.
Background

Federal agencies and our nation's critical infrastructures, such as energy, transportation systems, communications networks, and financial services, are dependent on computerized (cyber) information systems and electronic data to process, maintain, and report essential information, and to operate and control physical processes. Virtually all federal operations are supported by computer systems and electronic data, and agencies would find it difficult, if not impossible, to carry out their missions and account for their resources without these information assets. Hence, the security of these systems and data is vital to public confidence and the nation's safety, prosperity, and well-being. Ineffective security controls to protect these systems and data could have a significant impact on a broad array of government operations and assets. Yet computer networks and systems used by federal agencies are often riddled with security vulnerabilities, both known and unknown. These systems are often interconnected with other internal and external systems and networks, including the Internet, thereby increasing the number of avenues of attack and expanding their attack surface.

Furthermore, safeguarding federal computer systems has been a long-standing concern. This year marks the 21st anniversary of GAO first designating information security as a government-wide high-risk area in 1997. We expanded this high-risk area to include safeguarding the systems supporting our nation's critical infrastructure in 2003 and protecting the privacy of personally identifiable information in 2015. Over the last several years, we have made about 2,500 recommendations to agencies aimed at improving the security of federal systems and information. These recommendations identified actions for agencies to take to strengthen their information security programs and technical controls over their computer networks and systems. Nevertheless, many agencies continue to be challenged in safeguarding their information systems and information, in part because they have not implemented many of these recommendations. As of March 2018, about 885 of our prior information security-related recommendations had not been implemented.

Federal Law and Policy Provide DHS with Broad Authorities to Improve and Promote Cybersecurity

DHS has broad authorities to improve and promote the cybersecurity of federal and private-sector networks. The federal laws and policies that underpin these authorities include the following:

The Federal Information Security Modernization Act (FISMA) of 2014 clarified and expanded DHS's responsibilities for assisting with the implementation of, and overseeing, information security at federal agencies. These responsibilities include requirements to develop, issue, and oversee agencies' implementation of binding operational directives, including directives for incident reporting, contents of annual agency reports, and other operational requirements; monitor agencies' implementation of information security policies and practices; and provide operational and technical assistance to agencies, including by operating the federal information security incident center, deploying technology to continuously diagnose and mitigate threats, and conducting threat and vulnerability assessments of systems.

The Homeland Security Cybersecurity Workforce Assessment Act of 2014, among other things, requires DHS to assess its cybersecurity workforce.
In this regard, the Secretary of Homeland Security is to identify all positions in DHS that perform cybersecurity functions and to identify cybersecurity work categories and specialty areas of critical need.

The National Cybersecurity Protection Act of 2014 codified the role of the National Cybersecurity and Communications Integration Center (NCCIC), a center established by DHS in 2009, as the federal civilian interface for sharing information concerning cybersecurity risks, incidents, analysis, and warnings with federal and non-federal entities, including owners and operators of information systems supporting critical infrastructure.

The Cybersecurity Act of 2015, among other things, sets forth authority for enhancing the sharing of cybersecurity-related information among federal and non-federal entities. The act gives DHS's NCCIC responsibility for implementing this information sharing authority. The act also requires DHS to jointly develop with other specified agencies, and submit to Congress, procedures for sharing federal cybersecurity threat information and defensive measures with federal and non-federal entities; and to deploy, operate, and maintain capabilities to prevent and detect cybersecurity risks in network traffic traveling to or from an agency's information system. DHS is to make these capabilities available for use by any agency. In addition, the act requires DHS to improve intrusion detection and prevention capabilities, as appropriate, by regularly deploying new technologies and modifying existing technologies.

Long-standing federal policy, as promulgated by a presidential policy directive, executive orders, and the National Infrastructure Protection Plan, has designated DHS as a lead federal agency for coordinating, assisting, and sharing information with the private sector to protect critical infrastructure from cyber threats.

DHS Has Acted to Improve and Promote the Cybersecurity of Federal and Private-Sector Computer Systems, but Further Improvements Are Needed

We have reviewed several federal programs and activities implemented by DHS that are intended to mitigate cybersecurity risk for the computer systems and networks supporting federal operations and our nation's critical infrastructure. These programs and activities include deploying the National Cybersecurity Protection System, providing continuous diagnostic and mitigation services, issuing binding operational directives, sharing information through the National Cybersecurity and Communications Integration Center, promoting adoption of a cybersecurity framework, and assisting private-sector partners with cyber risk mitigation activities. We also examined DHS's efforts to assess its cybersecurity workforce. DHS has made important progress in implementing these programs and activities. However, the department needs to take additional actions to ensure that it successfully mitigates cybersecurity risks on federal and private-sector computer systems and networks.

DHS Needs to Enhance Capabilities, Improve Planning, and Support Greater Adoption of Its National Cybersecurity Protection System

DHS is responsible for operating the National Cybersecurity Protection System (NCPS), operationally known as EINSTEIN. NCPS is intended to provide intrusion detection and prevention capabilities to entities across the federal government. It is also intended to provide DHS with capabilities to detect malicious traffic traversing federal agencies' computer networks, prevent intrusions, and support data analytics and information sharing.
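As discussed below, NCPS's detection capability relies on comparing traffic against known-bad patterns, or signatures, rather than detecting deviations from baselines of normal behavior. The sketch below illustrates that distinction in a minimal way; the patterns, traffic samples, and threshold are hypothetical, and NCPS's actual sensors are far more sophisticated.

```python
# Minimal sketch contrasting signature-based detection (matching traffic
# against known-bad patterns) with baseline anomaly detection (flagging
# deviations from normal behavior). All values are illustrative.

import statistics

SIGNATURES = [b"malicious-payload", b"exploit-kit-v2"]  # known-bad patterns

def signature_match(packet):
    """Detects only traffic containing a known signature."""
    return any(sig in packet for sig in SIGNATURES)

def anomalous(traffic_volume, baseline, threshold=3.0):
    """Flags volumes more than `threshold` standard deviations from baseline."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return abs(traffic_volume - mean) > threshold * stdev

print(signature_match(b"GET / malicious-payload"))  # True: known threat caught
print(signature_match(b"GET / novel-zero-day"))     # False: unknown threat missed
print(anomalous(9_500.0, baseline=[1_000, 1_200, 950, 1_100, 1_050]))  # True
```

The second print call shows the limitation at issue: a purely signature-based approach cannot flag traffic whose pattern has not been seen before, which baseline-deviation methods are designed to catch.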
In January 2016, we reported that NCPS was partially, but not fully, meeting most of its four stated system objectives:

Intrusion detection: We noted that NCPS provided DHS with a limited ability to detect potentially malicious activity entering and exiting computer networks at federal agencies. Specifically, NCPS compared network traffic to known patterns of malicious data, or "signatures," but did not detect deviations from predefined baselines of normal network behavior. In addition, the system did not monitor several types of network traffic, and its signatures did not address threats that exploited many common security vulnerabilities; thus, it was not effective in detecting certain types of malicious traffic.

Intrusion prevention: The capability of NCPS to prevent intrusions (e.g., blocking an e-mail determined to be malicious) was limited to the types of network traffic that it monitored. For example, the intrusion prevention function monitored and blocked e-mail but did not address malicious content in other types of network traffic.

Analytics: NCPS supports a variety of data analytical tools, including a centralized platform for aggregating data and a capability for analyzing the characteristics of malicious code. In addition, DHS had further enhancements to this capability planned through 2018.

Information sharing: DHS had not developed most of the planned functionality for NCPS's information-sharing capability, and requirements had only recently been approved. Moreover, we noted that agencies and DHS did not always agree about whether notifications of potentially malicious activity had been sent or received, and agencies had mixed views about the usefulness of these notifications. Further, DHS did not always solicit, and agencies did not always provide, feedback on the notifications.

We recommended that DHS take nine actions to enhance NCPS's capabilities for meeting its objectives, better define requirements for future capabilities, and develop network routing guidance. The department agreed with our recommendations; however, as of April 2018, it had not fully implemented eight of the nine recommendations. As part of a review mandated by the Federal Cybersecurity Enhancement Act of 2015, we are currently examining DHS's efforts to improve its intrusion detection and prevention capabilities.

DHS Needs to Continue to Advance the CDM Program to Protect Federal Systems

The Continuous Diagnostics and Mitigation (CDM) program was established to provide federal agencies with tools and services that are intended to automate network monitoring, correlate and analyze security-related information, and enhance risk-based decision making at the agency and government-wide levels. These tools include sensors that perform automated scans or searches for known cyber vulnerabilities, the results of which can feed into a dashboard that alerts network managers and enables the agency to allocate resources based on risk. DHS, in partnership with and through the General Services Administration, established a government-wide acquisition vehicle for acquiring CDM capabilities and tools. The CDM blanket purchase agreement is available to federal, state, local, and tribal government entities for acquiring these capabilities.
There are three phases of CDM implementation, and the dates for implementing Phase 2 and Phase 3 appear to be slipping:

Phase 1: This phase involves deploying products to automate hardware and software asset management, configuration settings, and common vulnerability management capabilities. According to the Cybersecurity Strategy and Implementation Plan, DHS purchased Phase 1 tools and integration services for all participating agencies in fiscal year 2015.

Phase 2: This phase is intended to address privilege management and infrastructure integrity by allowing agencies to monitor users on their networks and to detect whether users are engaging in unauthorized activity. According to the Cybersecurity Strategy and Implementation Plan, DHS was to provide agencies with additional Phase 2 capabilities throughout fiscal year 2016, with the full suite of CDM Phase 2 capabilities delivered by the end of that fiscal year. However, according to the Office of Management and Budget's (OMB) FISMA Annual Report to Congress for Fiscal Year 2017, the CDM program began deploying Phase 2 tools and sensors during fiscal year 2017.

Phase 3: According to DHS, this phase is intended to address boundary protection and event management throughout the security life cycle. It focuses on detecting unusual activity inside agency networks and alerting security personnel. The agency had planned to provide 97 percent of federal agencies the services they need for CDM Phase 3 in fiscal year 2017. However, according to OMB's FISMA report for fiscal year 2017, the CDM program will continue to incorporate additional capabilities, including Phase 3, in fiscal year 2018.

In May 2016, we reported that most of the 18 agencies covered by the CFO Act that had high-impact systems were in the early stages of implementing CDM. All 17 of the civilian agencies that we surveyed indicated they had developed their own strategy for information security continuous monitoring. Additionally, according to the survey responses, 14 of the 17 civilian agencies had deployed products to automate hardware and software asset management, configuration settings, and common vulnerability management. Further, more than half of these agencies noted that they had leveraged products and tools provided through the General Services Administration's acquisition vehicle. However, only 2 of the 17 agencies reported that they had completed installation of agency and bureau/component-level dashboards and monitored attributes of authorized users operating in their agency's computing environment. Agencies noted that expediting the implementation of the CDM phases could benefit them in further protecting their high-impact systems.

Subsequently, in March 2017, we reported that effective implementation of the CDM tools and capabilities can assist agencies in overcoming the challenges of securing their information systems and information. We noted that our audits often identify insecure configurations, unpatched or unsupported software, and other vulnerabilities in agency systems. Thus, the tools and capabilities available under the CDM program, when effectively used by agencies, can help them diagnose and mitigate vulnerabilities to their systems. We reported that, by continuing to make these tools and capabilities available to federal agencies, DHS can also have additional assurance that agencies are better positioned to protect their information systems and information.
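The core CDM idea described above, automated scan results feeding a dashboard that supports risk-based resource allocation, can be sketched in a few lines. This is a minimal illustration: the hosts, findings, and severity scores are hypothetical (severities are loosely modeled on CVSS-style 0-10 scores), and actual CDM dashboards aggregate far richer data.

```python
# Minimal sketch of scan findings feeding a risk-ranked dashboard so that
# network managers can allocate remediation resources to the worst
# problems first. All values are illustrative.

from dataclasses import dataclass

@dataclass
class Finding:
    host: str
    vulnerability: str
    severity: float  # 0 (informational) to 10 (critical)

def dashboard(findings):
    """Return findings ordered for remediation, highest severity first."""
    return sorted(findings, key=lambda f: f.severity, reverse=True)

scan_results = [
    Finding("web-01", "unsupported OS version", 9.8),
    Finding("file-02", "weak TLS configuration", 5.3),
    Finding("db-03", "missing security patch", 8.1),
]

for f in dashboard(scan_results):
    print(f"{f.severity:>4} {f.host}: {f.vulnerability}")
```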
Other DHS Services Are Available to Help Protect Systems but Are Not Always Used by Agencies

Beyond the NCPS and CDM programs, DHS also provides a number of services that could help agencies protect their information systems. Such services include, but are not limited to, the following:

United States Computer Emergency Readiness Team (US-CERT) monthly operational bulletins, which are intended to provide senior federal government information security officials and staff with actionable information to improve their organizations' cybersecurity posture, based on incidents observed, reported, or acted on by DHS and US-CERT.

CyberStat reviews, which are in-depth sessions attended by National Security Staff, as well as officials from OMB, DHS, and an agency, to discuss that agency's cybersecurity posture and opportunities for collaboration. According to OMB, these reviews are face-to-face, evidence-based meetings intended to ensure that agencies are accountable for their cybersecurity posture. The sessions are intended to assist the agencies in developing focused strategies for improving their information security posture in areas where there are challenges.

DHS Red and Blue Team exercises, which are intended to provide services to agencies for testing their systems against potential attacks. A Red Team emulates a potential adversary's attack or exploitation capabilities against an agency's cybersecurity posture. The Blue Team defends an agency's information systems when the Red Team attacks, typically as part of an operational exercise conducted according to rules established and monitored by a neutral group.

In May 2016, we reported that, although participation in these services varied among the 18 agencies we surveyed, most of those that chose to participate reported that they generally found the services useful in aiding the cybersecurity protection of their high-impact systems. Specifically:

15 of 18 agencies reported that they participated in US-CERT monthly operational bulletins, and most said they found the service very or somewhat useful.

All 18 agencies reported that they participated in the CyberStat reviews, and most said they found the service very or somewhat useful.

9 of 18 agencies reported that they participated in DHS's Red/Blue Team exercises, and most said they found the exercises to be very or somewhat useful.

Half of the 18 agencies in our survey reported that they wanted an expansion of federal initiatives and services to help protect their high-impact systems. For example, these agencies noted that expediting the implementation of CDM phases, sharing threat intelligence information, and sharing attack vectors could benefit them in further protecting their high-impact systems. We believe that, by continuing to make these services available to agencies, DHS will be better able to assist agencies in strengthening the security of their information systems.

DHS Has Issued Binding Operational Directives to Federal Agencies

The Federal Information Security Modernization Act of 2014 (FISMA) authorizes DHS to develop and issue binding operational directives to federal agencies and to oversee their implementation. The directives are compulsory and require agencies to take specific actions that are intended to safeguard federal information and information systems from a known threat, vulnerability, or risk.
In September 2017, we reported that DHS had developed and issued four binding operational directives as of July 2017, instructing agencies to:

mitigate critical vulnerabilities discovered by DHS's National Cybersecurity and Communications Integration Center (NCCIC) through its scanning of agencies' Internet-accessible systems;

participate in risk and vulnerability assessments, as well as DHS security architecture assessments, conducted on agencies' high-value assets;

address several urgent vulnerabilities in network infrastructure devices identified in an NCCIC analysis report within 45 days of the directive's issuance; and

report cyber incidents and comply with annual FISMA reporting requirements.

Since July 2017, DHS has issued two additional binding operational directives instructing agencies to:

identify and remove the presence of any information security products developed by AO Kaspersky Lab on their information systems and discontinue the use of such products; and

enhance e-mail security by, among other things, removing certain insecure protocols, and ensure that public-facing websites provide services through a secure connection.

We plan to initiate work later this year to identify and assess DHS's process for developing and overseeing agencies' implementation of binding operational directives.

DHS's National Integration Center Generally Performs Required Functions but Needs to Evaluate Its Activities More Completely

In February 2017, we reported that NCCIC had taken steps to perform each of its 11 statutorily required cybersecurity functions, such as being a federal civilian interface for sharing cybersecurity-related information with federal and nonfederal entities. NCCIC managed several programs that provided data used in developing 43 products and services that the center made available to its customers in the private sector; federal, state, local, tribal, and territorial government entities; and other partner organizations. For example, NCCIC issued indicator bulletins, which could contain information related to cyber threat indicators, defensive measures, and cybersecurity risks and incidents, and which helped to fulfill its function to coordinate the sharing of such information across the government. Respondents to a survey that we administered to NCCIC's customers varied in their reported use of NCCIC's products but had generally favorable views of the center's activities.

The National Cybersecurity Protection Act also required NCCIC to carry out its functions in accordance with nine implementing principles, to the extent practicable. However, as we reported, the extent to which NCCIC adhered to the nine principles when performing the functions was unclear because the center had not yet determined the applicability of the principles to all 11 functions. It also had not established metrics and methods by which to evaluate its performance against the principles.

We also identified several impediments to NCCIC performing its cybersecurity functions more efficiently. For example, the center did not have a centralized system for tracking security incidents and, as a result, could not produce a report on the status of all incidents reported to the center. In addition, the center did not keep current and reliable customer information and was unable to demonstrate that it had contact information for all owners and operators of the most critical cyber-dependent infrastructure assets. We made nine recommendations to DHS for enhancing the effectiveness and efficiency of NCCIC.
Among other things, these recommendations called for the department to determine the applicability of the implementing principles, establish metrics and methods for evaluating performance against them, and address identified impediments. DHS agreed with the recommendations; however, as of April 2018, all nine recommendations remained unimplemented.

Additional Actions by DHS Are Needed for Promoting and Assessing Private-Sector Adoption of the Cybersecurity Framework

An executive order issued by the President in February 2013 (E.O. 13636) states that sector-specific agencies (SSA), which include DHS, are to review the National Institute of Standards and Technology Framework for Improving Critical Infrastructure Cybersecurity (cybersecurity framework) and, if necessary, develop implementation guidance or supplemental materials to address sector-specific risks and operating environments. In February 2014, DHS launched the Critical Infrastructure Cyber Community Voluntary Program to assist the enhancement of critical infrastructure cybersecurity and to encourage adoption of the framework across the critical infrastructure sectors. In addition, DHS, as the SSA and co-SSA for 10 critical infrastructure sectors, had developed framework implementation guidance for some of the sectors it leads.

Nevertheless, we reported weaknesses in DHS's efforts to promote the use of the framework across the sectors and within the sectors it leads. Specifically, in December 2015, we reported that DHS did not measure the effectiveness of its Critical Infrastructure Cyber Community Voluntary Program in encouraging use of the cybersecurity framework. In addition, DHS and GSA, which are the co-SSAs for the government facilities sector, had yet to determine whether sector implementation guidance should be developed for that sector. Further, in February 2018, we reported that none of the SSAs, including DHS, had measured the cybersecurity framework's implementation by entities within their respective sectors, in accordance with the nation's plan for national critical infrastructure protection efforts.

We made two recommendations to DHS to better facilitate adoption of the cybersecurity framework across the critical infrastructure sectors and within the government facilities sector. We also recommended that DHS develop methods for determining the level and type of framework adoption by entities across its sectors. DHS concurred with all three recommendations. As of April 2018, only the recommendation related to the government facilities sector had been implemented.

DHS Needs to Better Measure Effectiveness of Cyber Risk Mitigation Activities with Critical Infrastructure Sector Partners

Presidential Policy Directive 21, issued by the President in February 2013, states that SSAs are to collaborate with critical infrastructure owners and operators to strengthen the security and resiliency of the nation's critical infrastructure. In November 2015, we reported that the SSAs, including DHS, generally used multiple public-private mechanisms to facilitate the sharing of cybersecurity-related information. For example, DHS used coordinating councils and working groups to facilitate coordination among federal and nonfederal stakeholders. In addition, the department's NCCIC received and disseminated cyber-related information for public- and private-sector partners.
Nevertheless, we identified deficiencies in critical infrastructure partners' efforts to collaborate to monitor progress toward improving cybersecurity within the sectors. Specifically, the SSAs for 12 sectors, including DHS for 8 sectors, had not developed metrics to measure and report on the effectiveness of their cyber risk mitigation activities or on their sectors' cybersecurity posture. This was because, among other reasons, the SSAs relied on their private-sector partners to voluntarily share the information needed to measure these efforts. We made two recommendations to DHS—one based on its role as the SSA for 8 sectors and one based on its role as the co-SSA for 1 sector—to collaborate with sector partners to develop performance metrics and to determine how to overcome challenges to reporting the results of their cyber risk mitigation activities. DHS concurred with the two recommendations but, as of April 2018, had not demonstrated that it had implemented them.

DHS Has Taken Steps to Identify Its Workforce Gaps; However, It Urgently Needs to Take Actions to Identify Its Position and Critical Skill Requirements

In February 2018, we reported that DHS had taken actions to identify, categorize, and assign employment codes to its cybersecurity positions, as required by the Homeland Security Cybersecurity Workforce Assessment Act of 2014; however, its actions had not been timely or complete. For example, DHS had not met statutorily defined deadlines for completing actions to identify and assign codes to cybersecurity positions or ensured that its procedures to identify, categorize, and code its cybersecurity positions addressed vacant positions, as required by the act. The department also had not (1) identified the individual within each DHS component agency who was responsible for leading and overseeing the identification and coding of the component's cybersecurity positions or (2) reviewed the components' procedures for consistency with departmental guidance.

In addition, DHS had not yet completed its efforts to identify all of the department's cybersecurity positions and accurately assign codes to all filled and vacant cybersecurity positions. In August 2017, DHS reported to Congress that it had coded 95 percent of the department's identified cybersecurity positions. However, we determined that the department had, at that time, coded approximately 79 percent of the positions. DHS overstated the percentage of coded positions primarily because it excluded vacant positions, even though the act required the department to report such positions.

Further, although DHS had taken steps to identify its workforce capability gaps, it had not identified or reported to Congress on its department-wide cybersecurity critical needs that align with specialty areas. The department also had not annually reported its cybersecurity critical needs to the Office of Personnel Management (OPM), as required, and had not developed plans with clearly defined time frames for doing so. We recommended that DHS take six actions, including ensuring that its cybersecurity workforce procedures identify position vacancies and responsibilities, that reported workforce data are complete and accurate, and that plans for reporting on critical needs are developed. DHS concurred with the six recommendations and stated that it plans to take actions to address them by June 2018.
In conclusion, DHS is unique among federal civilian agencies in that it is responsible for improving and promoting the cybersecurity of not only its own internal computer systems and networks but also those of other federal agencies and the private-sector owners and operators of critical infrastructure. Consistent with its statutory authorities and responsibilities under federal policy, the department has acted to assist federal agencies and private-sector partners in bolstering their cybersecurity capabilities. However, the effectiveness of DHS's activities has been limited or not clearly understood because of shortcomings in its programs and a lack of useful performance measures. DHS needs to enhance its capabilities; expedite delivery of services; continue to provide guidance and assistance to federal agencies and private-sector partners; and establish useful performance metrics to assess the effectiveness of its cybersecurity-related activities. In addition, developing and maintaining a qualified cybersecurity workforce needs to be a priority for the department. Until it fully and effectively carries out its cybersecurity authorities and responsibilities, DHS's ability to improve and promote the cybersecurity of federal and private-sector networks will be limited.

Chairman Johnson, Ranking Member McCaskill, and Members of the Committee, this concludes my statement. I would be pleased to respond to your questions.

GAO Contacts and Staff Acknowledgments

If you or your staffs have any questions about this testimony, please contact Gregory C. Wilshusen at (202) 512-6244 or wilshuseng@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Nabajyoti Barkakati, Chris Currie, Larry Crosland, Tammi Kalugdan, David Plocher, Di'Mond Spencer, and Priscilla Smith.

Related GAO Products

GAO, Critical Infrastructure Protection: Additional Actions Are Essential for Assessing Cybersecurity Framework Adoption, GAO-18-211 (Washington, D.C.: Feb. 15, 2018).

GAO, Cybersecurity Workforce: Urgent Need for DHS to Take Actions to Identify Its Position and Critical Skill Requirements, GAO-18-175 (Washington, D.C.: Feb. 6, 2018).

GAO, Federal Information Security: Weaknesses Continue to Indicate Need for Effective Implementation of Policies and Practices, GAO-17-549 (Washington, D.C.: Sept. 28, 2017).

GAO, Cybersecurity: Federal Efforts Are Under Way That May Address Workforce Challenges, GAO-17-533T (Washington, D.C.: Apr. 4, 2017).

GAO, Information Security: DHS Needs to Continue to Advance Initiatives to Protect Federal Systems, GAO-17-518T (Washington, D.C.: Mar. 28, 2017).

GAO, High-Risk Series: Progress on Many High-Risk Areas, While Substantial Efforts Needed on Others, GAO-17-317 (Washington, D.C.: Feb. 15, 2017).

GAO, Cybersecurity: Actions Needed to Strengthen U.S. Capabilities, GAO-17-440T (Washington, D.C.: Feb. 14, 2017).

GAO, Cybersecurity: DHS's National Integration Center Generally Performs Required Functions but Needs to Evaluate Its Activities More Completely, GAO-17-163 (Washington, D.C.: Feb. 1, 2017).

GAO, Information Security: DHS Needs to Enhance Capabilities, Improve Planning, and Support Greater Adoption of Its National Cybersecurity Protection System, GAO-16-294 (Washington, D.C.: Jan. 28, 2016).

GAO, Critical Infrastructure Protection: Measures Needed to Assess Agencies' Promotion of the Cybersecurity Framework, GAO-16-152 (Washington, D.C.: Dec. 17, 2015).

GAO, Critical Infrastructure Protection: Sector-Specific Agencies Need to Better Measure Cybersecurity Progress, GAO-16-79 (Washington, D.C.: Nov. 19, 2015).

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study

The emergence of increasingly sophisticated threats and continuous reporting of cyber incidents underscores the continuing and urgent need for effective information security. GAO first designated information security as a government-wide high-risk area in 1997. GAO expanded the high-risk area to include the protection of cyber critical infrastructure in 2003 and the protection of the privacy of personally identifiable information in 2015.

Federal law and policy provide DHS with broad authorities to improve and promote cybersecurity. DHS plays a key role in strengthening the cybersecurity posture of the federal government and promoting the cybersecurity of systems supporting the nation's critical infrastructures. This statement highlights GAO's work related to federal programs implemented by DHS that are intended to improve federal cybersecurity and cybersecurity over systems supporting critical infrastructure. In preparing this statement, GAO relied on a body of work issued since fiscal year 2016 that highlighted, among other programs, DHS's NCPS, national integration center activities, and cybersecurity workforce assessment efforts.

What GAO Found

In recent years, the Department of Homeland Security (DHS) has acted to improve and promote the cybersecurity of federal and private-sector computer systems and networks, but further improvements are needed. Specifically, consistent with its statutory authorities, DHS has made important progress in implementing programs and activities that are intended to mitigate cybersecurity risks on the computer systems and networks supporting federal operations and our nation's critical infrastructure. For example, the department has issued cybersecurity-related binding operational directives to federal agencies; served as the federal civilian interface for sharing cybersecurity-related information with federal and nonfederal entities; and developed guidance to encourage adoption of the Framework for Improving Critical Infrastructure Cybersecurity.

Nevertheless, the department has not taken sufficient actions to ensure that it successfully mitigates cybersecurity risks on federal and private-sector computer systems and networks. For example, GAO reported in 2016 that DHS's National Cybersecurity Protection System (NCPS) had only partially met its stated system objectives of detecting and preventing intrusions, analyzing malicious content, and sharing information. GAO recommended that DHS enhance capabilities, improve planning, and support greater adoption of NCPS.

In addition, although the department's National Cybersecurity and Communications Integration Center generally performed required functions, such as collecting and sharing cybersecurity-related information with federal and nonfederal entities, GAO reported in 2017 that the center needed to evaluate its activities more completely. For example, the extent to which the center had performed its required functions in accordance with statutorily defined implementing principles was unclear, in part, because the center had not established metrics and methods by which to evaluate its performance against the principles.

Further, in its role as the lead federal agency for collaborating with eight critical infrastructure sectors, including the communications and dams sectors, DHS had not developed metrics to measure and report on the effectiveness of its cyber risk mitigation activities or on the cybersecurity posture of the eight sectors.
GAO reported in 2018 that DHS had taken steps to assess its cybersecurity workforce; however, it had not identified all of its cybersecurity positions and critical skill requirements. Until DHS fully and effectively implements its cybersecurity authorities and responsibilities, the department's ability to improve and promote the cybersecurity of federal and private-sector networks will be limited.

What GAO Recommends

Since fiscal year 2016, GAO has made 29 recommendations to DHS to enhance the capabilities of NCPS, establish metrics and methods for evaluating performance, and fully assess its cybersecurity workforce, among other things. As of April 2018, DHS had not demonstrated that it had fully implemented most of the recommendations.
Background

Statutory and executive requirements assert broad principles and require agencies to consider alternative ways of regulating and preferred regulatory designs, such as performance standards rather than means-based design standards. Further, these requirements and directives urge agencies to consider alternative approaches to eliciting compliance, such as alternative reporting methods or delayed compliance dates.

The Regulatory Flexibility Act (RFA) requires federal agencies to examine the impact of proposed, final, and existing rules on small businesses, small organizations, and small governmental jurisdictions, and to solicit the ideas and comments of such entities for this purpose. Among other requirements, the RFA requires that agencies consider regulatory alternatives that accomplish the stated objectives of a proposed rule while minimizing any significant impact on small entities. However, the RFA does not mandate any particular outcome in rulemaking.

Executive Order 12866 (E.O. 12866), issued in 1993, promotes a regulatory philosophy and set of principles that, to the extent permitted by law and where applicable, encourage agencies to assess the costs and benefits of their proposed and final regulations. It also directs agencies to consider available regulatory alternatives in all regulations, including the alternative of not regulating, and generally to select those alternatives that maximize net benefits, to the extent permitted by statute. Alternatives to direct regulation include providing economic incentives to encourage the desired behavior (such as user fees or marketable permits) or providing information upon which choices can be made by the public. If an agency determines that direct regulation is necessary, the executive order directs the agency, to the extent feasible, to specify performance objectives rather than the behavior or manner of compliance that regulated entities must adopt. Subsequent executive orders across administrations have reaffirmed this philosophy and these principles.

Circular A-4, issued by OMB in 2003, provides guidance and best practices to federal agencies for determining the potential effects of new regulations. It directs agencies to consider a number of regulatory alternatives, including market-oriented approaches rather than direct controls, performance standards rather than design standards, informational measures, and different compliance dates and enforcement methods, among others.

The RFA, specific statutes, and multiple executive orders have also emphasized the importance of regulatory lookbacks, also referred to as retrospective reviews, in which agencies evaluate how existing regulations work in practice:

Statutory requirements: Section 610 of the RFA requires agencies to review all regulations that have or will have a significant impact on small entities within 10 years of the publication of the rule to determine whether such rules should be continued without change or should be amended or rescinded, consistent with the stated objectives of applicable statutes, to minimize impacts on small entities. Congress also established other requirements for agencies to review the effects of regulations issued under specific statutes, such as the Clean Air Act.

Executive Order 13771, issued in January 2017, requires executive agencies to identify at least two existing regulations to be repealed whenever they publicly propose or otherwise promulgate a new regulation, unless prohibited by law.
Agencies must also annually provide to OMB their best approximation of the total costs or savings associated with each new or repealed regulation. Finally, the order requires that the total incremental cost of all new regulations, including the savings from regulations that have been repealed, be no greater than zero for fiscal year 2017, unless otherwise required by law or consistent with advice provided in writing by the OMB Director.

Executive Order 13777, issued in February 2017, requires each agency to designate an official as its Regulatory Reform Officer. Regulatory Reform Officers oversee the implementation of regulatory reform initiatives to ensure that agencies effectively carry out regulatory reforms, consistent with applicable law. Agencies must also establish Regulatory Reform Task Forces to evaluate existing regulations and make recommendations regarding their repeal, replacement, or modification, consistent with applicable law.

Selected Agencies Reported Using Statutory and Executive Requirements and Regulatory Objectives in Their Decision-Making Processes

Agencies Have Multiple Regulatory Design Options Available to Achieve Their Objectives, Depending on Statutory Discretion

When agencies determine that they may need to regulate, they generally have multiple regulatory designs available to achieve their objectives. Agencies are directed by statutory and executive requirements to assess alternatives to regulatory action—including not issuing new regulations—and different ways of regulating. Available regulatory designs range from prescriptive regulations that specify the adoption of a certain technology or action to designs that generally provide regulated entities with more discretion and options for compliance and, in some instances, hybrid designs that incorporate both prescriptive and less prescriptive elements. Alternatives to prescriptive regulations provide regulated entities with greater flexibility. For example, performance-based regulations require a certain outcome but allow regulated entities discretion to determine how they will achieve that outcome, while market-based regulations use tradeable permits or fees to influence behavior. Table 2 highlights the regulatory designs identified through our literature review and corroborated by subject matter specialists and agency officials. The table includes selected examples of applicable regulations implemented by our case study agency subcomponents.

Statutes give agencies varying degrees of discretion to consider multiple designs as they develop regulations to meet their objectives. In some instances, Congress directs agencies by statute to implement specific regulatory designs. For example, the Occupational Safety and Health Act directs the Occupational Safety and Health Administration (OSHA), when promulgating a standard, to either (1) adopt existing scientific and industry consensus standards for workplace health and safety or (2) explain why the standard adopted by the agency better protects workers than the national consensus standard. In addition, requirements dealing with exposures to toxic materials must be formulated in terms of "objective criteria and the performance desired" whenever practicable. The Clean Air Act provides the Environmental Protection Agency's (EPA) Office of Air and Radiation (OAR) with varying degrees of discretion to consider different regulatory designs when developing its regulatory programs.
For example, the Clean Air Act gave the office broad authority to establish a tradeable emissions allowance system—commonly referred to as cap and trade—with a market-based design for its Acid Rain Program, but directed it to promulgate specific prescriptive regulations for the National Emission Standards for Hazardous Air Pollutants program.

Selected Agencies Stated a Preference for Less Prescriptive Designs to Achieve Regulatory Objectives

Officials at selected agencies reported a general preference for less prescriptive regulations, in accordance with E.O. 12866, Circular A-4, and other executive requirements, which encourage agencies to consider less prescriptive regulatory design options for achieving their objectives. For example, Department of Transportation (DOT) officials told us that, when choosing among regulatory design options, they prefer performance-based regulations over means-based regulations. Officials from DOT's Pipeline and Hazardous Materials Safety Administration (PHMSA) told us that performance-based regulations—as implemented for classifying and packaging hazardous material—allow them to accommodate innovations among regulated entities, adapt to technological advances, and promote the competitiveness of U.S. firms in global markets without having to subsequently revise the regulations. The following examples illustrate how some selected subcomponents have (1) encouraged the development of less prescriptive design options for new regulatory programs and (2) updated or replaced existing regulations to incorporate more flexible designs.

Developing trainings to encourage less prescriptive designs: Two selected subcomponents produced training materials to promote the consideration of all options for designing effective regulation, including less prescriptive regulations where appropriate. EPA's Office of Enforcement and Compliance Assurance developed a workbook and supplemental training course that present principles and tools to help rule drafters consider the relative effectiveness of different designs for achieving regulatory objectives, including how the degree of prescriptiveness can either promote or hinder compliance. The Federal Aviation Administration's (FAA) "Performance-Based Regulations Training" course uses real-world examples and team exercises to teach rule drafters (1) the concepts that inform performance-based designs, (2) the relationship between prescriptive and less prescriptive regulatory approaches, and (3) considerations for developing and assessing performance-based regulations.

Updating or replacing existing regulations to incorporate flexible designs: FAA's 2016 airworthiness standards for small airplanes replaced some prescriptive design requirements with more flexible performance-based standards. Agency officials told us that they expect the new regulation to improve safety and cost-effectiveness (such as by reducing compliance costs) while facilitating future technological innovations. Animal and Plant Health Inspection Service (APHIS) officials told us that increased international demand for cattle exports put pressure on their inspection infrastructure and prompted them to replace their formerly prescriptive standards with performance-based regulations that officials described as more flexible and easier to adapt to changing circumstances.
Food Safety and Inspection Service (FSIS) officials told us that their Hazard Analysis and Critical Control Point (HACCP) rule represented a shift from FSIS's traditional means-based regulations (which mandated specific food production standards) to a mixed performance- and management-based regulatory program (which monitors food safety plans and production outcomes).

Agencies Reported That Regulatory Objectives May Require Prescriptive Designs or the Use of Multiple Designs

Despite a general preference for less prescriptive designs among selected agencies, officials from nine selected subcomponents told us that their regulatory objectives sometimes required a prescriptive regulation or that, in some instances, regulated entities expressed a preference for prescriptiveness.

Mine Safety and Health Administration (MSHA) officials told us that their regulations were often necessarily prescriptive to implement and enforce the mine health and safety standards required by statute. For example, based on data from the National Institute for Occupational Safety and Health, MSHA determined that requiring more frequent respirable dust sampling for mining occupations known to have high dust levels, and requiring the use of certain monitoring devices to measure respirable coal dust exposure, are necessary to limit exposure to respirable coal mine dust and thus reduce occupational lung diseases.

Bureau of Industry and Security (BIS) officials told us that their export licensing regulations are necessarily prescriptive to narrowly target specific items as unacceptable for export because of national security concerns or commercial sanctions against certain countries.

Food and Drug Administration (FDA) officials told us that, while they try to achieve a balance between prescriptive and less prescriptive regulatory designs, in some instances prescriptive regulations are the only means of ensuring public health and safety.

Officials from EPA's Office of Chemical Safety and Pollution Prevention (OCSPP) told us that, when given non-prescriptive regulatory options, small businesses generally prefer prescriptive regulations with clear compliance requirements to minimize uncertainty.

An EPA OAR official told us that, during the update of a recent regulation on refrigerants, the agency considered including a provision allowing operators of pollutant-emitting facilities the option to either (1) set a corporate-wide budget for leaks covering all facilities or (2) comply with a prescriptive regulation for individual appliances susceptible to leakage. Based on feedback from regulated entities and EPA enforcement officials, who voiced a need for predictability and ease of monitoring, EPA officials said that they ultimately chose to promulgate the more prescriptive regulation instead of the more flexible, but challenging to implement, corporate-wide approach.

Ten selected subcomponents incorporated multiple design elements into their regulations—what we refer to as hybrid designs—that offer more flexibility or, conversely, more clarity to meet the needs of different regulated entities. PHMSA officials told us that their special permits programs for hazardous materials and pipelines allow regulated entities the flexibility to determine their own means of satisfying transportation safety requirements if they achieve the same level of safety prescribed by regulation. FAA officials told us that most of their safety standards are necessarily prescriptive to ensure clarity and uniformity.
However, they said that they often encourage the use of multiple designs in their rulemakings that allow for both performance-based and means-based regulations—as with the 2016 airworthiness standards for small airplanes. OSHA officials told us that they provide employers with multiple options for achieving regulatory compliance that incorporate both prescriptive and less prescriptive design elements. For example, OSHA's health standards regulating crystalline silica exposure among construction site workers provide employers both a performance-based option (which allows regulated entities discretion in determining how to meet permissible exposure limits) and a means-based option (in which regulated entities implement specified exposure mitigation measures for designated tasks). FDA and FSIS have both implemented voluntary programs to promote the adoption of practices among regulated entities that align with the agencies' regulatory objectives. FSIS encourages regulated food facilities to develop voluntary food defense plans as a means of mitigating potential health hazards and strengthening food safety. FDA officials told us they issued voluntary food labeling standards for raw fruits and vegetables to assist in establishing an industry standard, and achieved 80 percent compliance among regulated entities.

Selected Agency Processes Included Practices for Considering and Assessing Regulatory Design Options

All selected agencies told us that their processes for drafting regulations incorporated internal discussions to consider available regulatory design options. For example, Employee Benefits Security Administration (EBSA) officials told us that the agency's process encourages rule drafters to solicit input from internal and external stakeholders to inform the consideration of all possible regulatory design options available to achieve statutory objectives. BIS officials told us that proposals for broadly applicable regulations—including available design options—are discussed and vetted with multiple stakeholders, including (1) BIS subcomponent officials, (2) Office of General Counsel staff, (3) agency engineers, and (4) external technical advisory committees. In addition, some selected subcomponents' processes for drafting proposed regulations included documentation of identified design options for achieving objectives and assessments of the risk or the enforcement and compliance implications of identified design options. These practices for identifying and assessing regulatory designs are described in the following examples.

Documenting the assessment of design options for achieving regulatory objectives: EPA uses an Analytical Blueprint to identify the range of regulatory design options considered throughout the Action Development Process (ADP)—the agency's process for developing and responding to public comments on new regulatory proposals. FSIS officials told us that rule drafters develop an "options paper" to identify and assess alternative approaches to achieving regulatory objectives based on multiple inputs, including (1) data analyses, (2) subject matter expertise, and (3) stakeholder feedback.
FAA officials told us that rule-drafting groups discuss regulatory design options when developing a Rulemaking Action Plan and present these alternatives in briefing documents to the principal agency managers, referred to as "principals briefs." FDA officials told us that rule-drafting groups generally develop a concept paper or other summary document to determine the optimal means of achieving a regulatory goal, including considerations of multiple design options.

Assessing the risk associated with identified regulatory design options: Three selected subcomponents incorporated assessments of risk into their rule-drafting procedures. DOT's Rulemaking Requirements direct agency officials to "consider, to the extent reasonable, the degree and nature of the risks posed [by agency action]" and "how the agency action will reduce risks to public health, safety, and the environment," per Executive Order 12866. EPA's ADP specifies that Analytical Blueprints identify, assess, and discuss the risk management implications of proposed regulatory design options. The U.S. Department of Agriculture's (USDA) Regulatory Decisionmaking Requirements direct rule drafters to conduct a comparison of risks for regulatory design options and provide a description of the level of uncertainty and unknowns associated with each design.

Assessing the enforcement and compliance implications of identified regulatory design options: An official from FSIS told us that representatives from its Office of Field Operations or Office of Investigation, Enforcement, and Audit often participate in rule-drafting groups to provide an enforcement perspective. A BIS official told us that rule drafters solicit informal feedback from enforcement officials to ensure the practicability of regulatory standards during both the development of prospective regulations and the initial implementation of new regulations. EPA's procedures require that enforcement officials participate in EPA's ADP rule-drafting groups for rules involving "precedent-setting policy implications" and "extensive cross-agency participation," and EPA officials told us that enforcement officials are often involved in the drafting of other rules as well. Further, EPA Office of Enforcement and Compliance Assurance training and guidance materials encourage rule drafters to incorporate compliance principles—such as clarity, consistency, and transparency—into their decision making and to consider how regulatory design choices can influence later compliance and the need for enforcement.

Considering compliance and enforcement implications while making regulatory design decisions is important because, as agency officials stated, different design choices have implications for future compliance and enforcement resources. For example, PHMSA officials told us they create an implementation plan for any proposed regulation with an expected impact on enforcement resources. Officials from OSHA and EPA's Office of Land and Emergency Management (OLEM) told us that management-based regulations—such as OSHA's Process Safety Management requirements for oil refineries and chemical facilities and OLEM's Risk Management Program for facilities that use hazardous chemical substances—can be resource-intensive to enforce because of the greater technical expertise needed to review highly individual and technical plans among heterogeneous regulated entities to ensure compliance.
An EPA OAR official told us that the design of its cap-and-trade system—tradeable allowances that require regulated entities to monitor and report their emissions to EPA—limits the need for enforcement resources to only those entities that do not comply with monitoring, reporting, and allowance-holding requirements.

Selected Agencies Reported Using Multiple Tools and Approaches for Allocating Resources to Elicit Compliance

To Elicit Compliance, Agencies Generally Have Flexibility to Use a Mix of Available Tools

When regulations are promulgated, agency officials must determine how they will promote compliance with their regulations and deter noncompliance. Agencies generally have the flexibility to tailor their compliance and enforcement strategies to encourage voluntary compliance and inform regulated entities of regulatory requirements. Agency officials decide on the appropriate mix of compliance assistance and monitoring and enforcement efforts to achieve regulatory outcomes. Based on our review of relevant academic literature, there are multiple tools available to agencies to elicit compliance, although agencies traditionally use two. The first, compliance assistance, helps regulated entities understand and meet regulatory requirements. For example, an agency may provide assistance through educational materials and outreach to promote compliance among regulated entities. The second, the use of monitoring, enforcement, and data reporting, ensures that regulations are followed and deters noncompliance. Agencies may also supplement these traditional approaches with options that provide more accommodating and flexible opportunities to promote compliance among regulated entities, such as developing cooperative programs or providing onsite consultation services. Table 3 identifies some of the options by which agency officials may accomplish their regulatory goals.

As described in table 3, agencies use compliance assistance tools, such as education and consultation, to ensure that regulated entities understand regulatory requirements and to provide examples of how to comply. One way that agencies do this is by providing regulatory guidance to regulated entities in the form of frequently asked questions, tools, or factsheets. We reported in 2015 that agencies used a wide variety of guidance to interpret new regulations and clarify policies in response to questions or compliance findings. We have also recommended that selected agencies could further help regulated entities comply, and agencies have implemented those recommendations by offering further clarifications and guidance. The selected subcomponents that we reviewed employed a variety of compliance assistance activities. For example:

FSIS provides compliance guidance and makes training materials available to its regulated entities, such as meat, poultry, and egg product plants, and maintains help desks to provide technical assistance to its regulated community.

BIS holds domestic and international seminars, provides online and in-person trainings, responds to inquiries submitted online, issues industry advisory opinions, and works with other federal agencies to provide immediate error alerts to filers using its Automated Export System.

FDA provides web-based, in-person, and telephone education and outreach; hosts webinars, public meetings, and stakeholder meetings; and posts training videos and blogs.
For example, the agency established a central source of information for questions related to its 2011 Food Safety Modernization Act rules, programs, and implementation strategies.

Regulatory agencies also engage in enforcement activities, such as conducting inspections, monitoring reported data, and issuing fines when noncompliance is identified. The selected agencies we reviewed reported using criteria such as data, compliance history, and trends in noncompliance to identify risks and more efficiently target enforcement activities. For example:

OSHA conducts two types of inspections—"unprogrammed" and "programmed"—to target resources for the 8 million workplaces it regulates. Unprogrammed inspections respond to specific complaints or injuries, while programmed inspections target resources toward specific high-risk industries and employers.

FSIS officials analyze noncompliance trends for its food safety process control regulations at meat, poultry, and egg processing facilities and send inspection officials "early warning" alerts when the establishments they inspect reach certain noncompliance rates.

APHIS's Animal Care program uses its Risk Based Inspection System to conduct more frequent and in-depth inspections at facilities with a higher risk of animal welfare concerns, and fewer at those that are consistently compliant. The system uses criteria, such as past compliance history and the seriousness of documented noncompliance, to determine minimum inspection frequencies for licensed and registered facilities.

The selected agencies also reported supplementing traditional compliance assistance and enforcement approaches with other tools, including the following:

Cooperative programs: OSHA uses multiple cooperative programs to recognize employers who have introduced health and safety initiatives at their worksites that exceed requirements. OSHA's Voluntary Protection Program rewards employers that exceed worker safety requirements with an exemption from routine inspections while they maintain their status in the program; participating employers are reevaluated every 3 to 5 years. OSHA uses its Challenge Program to partner successful employers as mentors for employers who are attempting to improve their safety and health programs. The Centers for Medicare and Medicaid Services' (CMS) Skilled Nursing Facility Value-Based Purchasing Program is authorized to use incentive payments to recognize nursing homes that exceed minimum standards of quality.

Onsite consultation services: OSHA works with state governments to provide onsite consultation services to small- and medium-sized businesses. These consultations assist employers in identifying potential hazards and improving their injury and illness prevention programs. MSHA offers compliance assistance and outreach through "walk and talks," during which MSHA inspectors and education outreach staff provide mine operators and miners with information on hazardous tasks and conditions, as well as best practices to prevent accidents, injuries, and fatalities.

Voluntary disclosures: FAA implements a number of voluntary reporting programs. For example, its Flight Operational Quality Assurance program allows commercial airlines and their employees to anonymously report incident information. The agency then uses this information to monitor trends and target resources. BIS encourages parties who believe they may have violated its export regulations to self-disclose.
Officials then review the disclosure to determine whether a violation has occurred and to identify the appropriate corrective action. BIS views a self-disclosure as an indicator of a party's intent to comply with its requirements. EBSA's Voluntary Fiduciary Correction Program and Delinquent Filer Voluntary Compliance Program encourage voluntary compliance by allowing plans and plan fiduciaries to self-correct certain violations and by offering relief from higher civil penalty assessments.

Third-party certification: EPA OCSPP's formaldehyde emissions rules require foreign and domestic wood mills to receive third-party certification that certain wood products meet defined standards. EPA must approve the third parties that certify the products.

Selected Agencies Reported Considering Multiple Factors and Taking Different Approaches in Allocating Resources to a Mix of Compliance and Enforcement Tools

Agencies generally have flexibility in making decisions on and allocating resources for a mix of compliance assistance and enforcement strategies. However, some selected agencies reported that statutory requirements, programmatic constraints, and changing priorities affected how they allocated resources for compliance and enforcement activities. For example:

MSHA must prioritize available resources to fund inspections because it is required by law to inspect every underground mine four times a year and every surface mine twice a year. Once those resources have been allocated for inspections, any additional resources may be used for compliance-related activities.

FSIS's allocation of resources is similarly constrained because the agency is statutorily required to be present at every meat, poultry, and egg product facility whose product enters into commerce in order for the facility to operate.

APHIS is programmatically constrained in allocating resources between enforcement and compliance assistance because another federal department enforces some of its promulgated regulations and thus determines compliance resources and approaches. The agency's Agricultural Quarantine Inspection program inspection activities are performed by Customs and Border Protection within the Department of Homeland Security.

The type and behavior of regulated entities also affect selected agencies' decisions on strategies to achieve compliance. The characteristics of regulated entities—such as the heterogeneity or homogeneity of the regulated community and the frequency of interaction with agency officials—may inform agency compliance assistance and enforcement resource decisions. Some of the selected agencies described frequent interaction with regulated entities that were homogeneous or easily identified. As a result, officials said, it is easier for their agencies to ensure that regulated entities are aware of applicable requirements, and there may be less need to invest in compliance assistance. For example, the operators of the pipelines PHMSA regulates are a small and well-known community. Similarly, FSIS inspectors must be present at each meat, poultry, or egg products facility, at frequencies determined by the type of operation being conducted, for it to function. MSHA inspects a fixed number of mines, and its inspectors are often onsite; however, MSHA officials stated that some mines are better than others at complying with health and safety standards.
In contrast, large and heterogeneous communities present different needs and considerations that may inform agencies' compliance assistance and enforcement resource decisions. When regulated entities are less likely to engage with inspectors or other federal officials, agencies' decisions on allocating resources to ensure that all regulated entities understand requirements and to elicit voluntary compliance are important. As previously discussed, OSHA regulates and monitors a large and diverse community of regulated entities. EBSA monitors approximately 685,000 private retirement plans, 2.2 million health plans, and a similar number of other welfare benefit plans. CMS regulates more than 15,000 large and small nursing home facilities across the country. In contrast to its pipeline-related regulations, PHMSA also regulates a broad spectrum of transportation operators and hazardous materials, requiring a different approach to disseminating information and providing outreach.

At the selected agencies we reviewed, agency officials told us that the main objective of their regulatory enforcement efforts is to achieve compliance with regulatory requirements. The selected agencies took different approaches to achieving compliance and used compliance and enforcement tools to escalate pressure on regulated entities that do not comply. For example, FDA officials told us that when the agency identifies noncompliance, it may not immediately sanction a regulated entity. Rather, the agency may begin with a meeting or call with the regulated entity to address the noncompliance and gradually implement more serious regulatory compliance measures (such as a negative inspection report or warning letter), or even seek an injunction from the relevant court(s) if it cannot resolve the noncompliance. APHIS also uses a range of compliance assistance activities to promote compliance and reserves its enforcement authority for the most serious situations and instances of noncompliance. For example, APHIS officials told us the agency offers facilities struggling to maintain compliance the opportunity to work with trained compliance specialists to develop options and plans to promote future compliance. PHMSA officials told us the agency uses its Systems Integrity Safety Program as a non-adversarial tool that provides compliance assistance to regulated entities not currently in compliance. They said that the agency generally will not initiate enforcement actions against regulated entities enrolled in this program but will pursue them if there are violations that PHMSA believes to be willful or where a safety violation presents an imminent hazard.

Despite a common objective of eliciting compliance, selected agencies' approaches to allocating resources for compliance and enforcement differ. While some agencies consider allocations for compliance and enforcement in implementing each individual regulation, others allocate resources across regulations and regulatory programs. For example, the Department of Labor (Labor) allocates compliance assistance and enforcement resources for individual regulations depending on multiple factors, such as the nature of the regulation and the underlying subject matter. In contrast, EPA allocates resources across regulations, programs, and regions. Its Office of Enforcement and Compliance Assurance works with each regional office to allocate enforcement and compliance assistance resources for the various programs across EPA.
In addition, certain agencies we reviewed distinguish between compliance assistance and enforcement activities, while others view these activities as a joint effort. For example, EBSA allocates its resources between benefits advisors, who provide compliance assistance, and its enforcement staff. Conversely, OSHA inspectors provide compliance assistance to regulated entities in addition to performing their enforcement roles, supplementing onsite outreach and education provided by compliance assistance specialists located in regional offices.

To appropriately allocate their enforcement and compliance resources, the selected agencies we reviewed also collect and review data to identify noncompliance trends. For example:

OSHA uses collected data to identify national and local special emphasis programs that highlight specific workplace health and safety issues as the focus of targeted outreach and enforcement efforts.

EBSA's national office annually establishes enforcement priorities—and shifts resources to respond with new emphases—through guidance outlined in its Enforcement Program Operating Plan. In preparing this guidance, EBSA assesses current enforcement activities, identifies recent enforcement trends, analyzes available information regarding industry activities and areas of noncompliance, and reviews current policy considerations to identify possible areas of potential risk within the employee benefit plan industry.

EPA officials told us they use their National Enforcement Initiatives to prioritize resources for compliance concerns that are particularly entrenched or problematic. Further, EPA initiated its Next Generation Compliance (NextGen) strategy to structure regulations and permits with new monitoring and information technology, expanded transparency, and innovative enforcement activities. NextGen was designed to increase the transparency and real-time information made possible by electronic reporting and advanced monitoring, and it allows the agency and its stakeholders the opportunity to experiment with innovative approaches. Furthermore, EPA stated that it and its stakeholders are better able to identify and solve environmental issues and to address large regulated communities with approaches that go beyond traditional single-facility inspections and enforcement.

Selected Agencies Have Made Efforts to Make Compliance Data Transparent and Accessible

Transparency and availability of data are important to promoting compliance and achieving regulatory objectives. The selected agencies that we reviewed have made efforts to make compliance and enforcement information more transparent and accessible to the public. For example:

All the Labor subcomponents we reviewed made efforts to make data and information more publicly accessible. MSHA developed online compliance tools that allow the public to monitor a mine's compliance with key safety and health standards by providing a broad range of mine safety and health data, including information about mine inspections, accidents, injuries, illnesses, violations, employment, production totals, and air sampling. One of these tools is the "Rules to Live By Calculator," which focuses on the 49 safety standards most often associated with fatal mining accidents and serious injuries.

EPA's Enforcement and Compliance History Online (ECHO) database provides integrated compliance and enforcement data for over 800,000 regulated facilities on air emissions, surface water discharges, hazardous waste, and drinking water systems.
The database includes EPA, state, local, and tribal environmental agency compliance and enforcement records that are reported into national databases. ECHO also incorporates EPA environmental data sets to provide additional context for analyses. CMS created a “Nursing Home Compare” website to assist consumers in comparing information about nursing homes. The website contains detailed information on the quality of care and staffing information for more than 15,000 Medicare- and Medicaid-participating nursing homes, including a five-star scale of quality ratings of overall and individual performance on health inspections, quality measures, and hours of care provided per resident by staff performing nursing care tasks. Selected Agencies Reported They Evaluated Regulatory Decisions by Collecting Feedback, and Responses to Identified Noncompliance Varied Selected Agencies Supplement Feedback on Effectiveness of Regulatory Design and Enforcement Approaches with Evaluations While agency officials receive feedback on their regulations during rulemaking, they also have opportunities to receive feedback during implementation of the regulation and as part of later retrospective review efforts. In 2007 and 2014, we reported on retrospective reviews of individual regulations, which agencies use to evaluate how existing regulations work in practice. As mentioned previously, two executive orders issued in 2017 also emphasize the importance of retrospective review, and officials from two agencies told us that they are currently examining their regulatory evaluation processes in response to these directives. To supplement retrospective review efforts, officials told us that they collect feedback from both internal and external stakeholders on the effectiveness of their regulatory design and enforcement decisions. This feedback may occur during rulemaking or during implementation, and might prompt changes. For example: EPA officials told us they provide opportunities for regulated entities to give feedback, and that they may reconvene the initial Regulatory Working Group for a rule if they hear complaints or concerns. At DOT, FAA officials told us they collect feedback about potential needs to update or change rules through requests for exemptions and through their various advisory committees. According to PHMSA officials, advisory committee inputs or petitions are two ways they evaluate the success of their regulations. MSHA officials told us that in response to comments received during rulemaking, they changed their rule on proximity detection systems for continuous mining machines, which protects miners from being struck by such machines. MSHA initially proposed specifying certain requirements for a technology but used a performance-based approach in its final rule. This experience subsequently informed MSHA’s proposed design for its new rule for proximity detection systems for mobile machines, in which the agency proposed a performance standard from the outset of the rulemaking. A BIS enforcement official told us that his office requested a revision to an existing regulation that was difficult to enforce because it did not provide clear requirements for how companies could determine when a government-identified “red flag”—a party on BIS’ Unverified List—could be resolved. BIS received similar feedback from advisory committees and revised the regulation for clarity.
According to APHIS officials, they evaluate the effectiveness of their compliance and enforcement activities by tracking compliance rates under the Animal Welfare Act and through feedback from their regulated entities. USDA officials also stated that interactions with inspectors and listening sessions provide the department’s agencies with feedback. Selected agency officials cited concerns about changing the design of established regulatory programs and the resources required for the rulemaking process. Two of our selected agencies mitigated these concerns by piloting new regulatory designs. USDA implemented an ongoing project—the HACCP Inspection Models Project—to assess the viability of applying potential performance-based regulations to ensure food safety at hog and poultry processing facilities. After assessing inspection findings for the poultry pilot project and in response to public comments on the program, USDA ultimately determined that the regulation should be broadened to additional facilities. FAA used feedback from pilot studies, in which more than 30 public-use airports participated, to inform a proposed rule for Airport Safety Management Systems. Agencies also typically have flexibility to continue to change and adjust their compliance and enforcement strategies in response to feedback and evaluation without going through the rulemaking process to amend a final regulation. As previously mentioned, agencies assess the effectiveness of their enforcement and compliance efforts by collecting data to target their enforcement efforts. In addition, selected agencies identified evaluations of their enforcement and compliance efforts, including: Labor’s Chief Evaluation Office officials told us they work with Labor components to (1) develop and implement research studies, (2) address how collected information is used to assess effectiveness, and (3) support data analysis to inform management decision making. For example, the office worked with OSHA to pilot changes to issuing and following up on citations to increase employer responsiveness. The study, which began in 2015, found that employers subject to the new citation process (which included elements such as a handout during inspections, postcard reminders, and a follow-up call) were 3.9 percentage points more likely to engage with OSHA. EPA’s Office of Enforcement and Compliance Assurance wrote a guide for EPA managers and staff on its integrated strategic approach to effectively eliciting compliance, focusing on compliance assistance, incentives, monitoring, enforcement, and other tools. EPA has also conducted research on what makes a regulation more likely to elicit compliance and identified principles and tools to aid in writing more effective regulations. For example, EPA directs rule drafters to use clear and objective regulatory requirements and applicability criteria, to structure regulations to make compliance easier than noncompliance, and to leverage regulated entities and/or third parties to assess compliance and prevent noncompliance. It also encourages agency officials to leverage accountability and transparency through e-reporting to government and public access to data on websites. According to PHMSA officials, they developed formal enforcement goals, strategies, and metrics after reviewing leading practices for enforcement, including reviewing the compliance strategies at other DOT subcomponents.
They analyzed data to identify commonalities among violations that cause incidents, as well as those that increase the severity of incidents. They also reviewed enforcement data to identify guidance that needs to be improved, provide feedback to inspectors, and ultimately provide ideas for improved rulemaking and regulatory design. Selected Agency Responses to Continued Widespread Noncompliance Varied Selected agencies responded differently when they identified continued widespread noncompliance through their evaluations or monitoring of compliance data. Some agencies told us they view a record of noncompliance as a fault in the regulation and may update their regulatory design, while others may change compliance strategies. FSIS officials told us they use enforcement data to analyze the effectiveness of their regulations, and may make changes to their regulations based on trends in noncompliance. According to PHMSA officials, they analyze enforcement data in several ways, including identifying regulations with the highest rates of noncompliance to understand weaknesses in individual regulations. MSHA officials told us that when an Inspector General audit found that its enforcement actions were not strong enough for repeat violators, the agency updated its Pattern of Violations regulation to better attain compliance. Conversely, OSHA officials told us that they view persistent noncompliance or workplace injuries and illnesses as indicating a need to revisit and readdress how compliance assistance is being provided and enforcement applied, rather than as a reason to adjust the regulation. EPA officials told us that they will update an existing regulation to solve an ongoing compliance problem only as a last resort due to the large resource investment required and the disruption to regulated entities that must adapt to changes in regulatory design. Key Considerations Could Strengthen Agency Regulatory Design and Enforcement Decisions We built upon current statutory and executive requirements and selected agencies’ current practices to identify key considerations to strengthen agency processes for regulatory design and enforcement decisions. As agency officials craft regulations, they are guided by high-level statutory requirements, economic principles in executive orders, and OMB directives and resources. In accordance with those directives, our selected agencies have implemented varied practices to facilitate their regulatory design and enforcement decisions. Based on our review of those directives and the selected agencies’ processes, as well as academic and practitioner research, past IG work and our own past work, and existing criteria and resources for federal managers, we identified key considerations for regulatory design and compliance to aid decision makers in designing—or redesigning—their regulations and determining how best to elicit compliance. The key considerations for regulatory design and compliance shown in figure 1 are intended to serve as a resource to supplement existing directives and guidance. We identified these considerations to bridge the gap between high-level directives and current agency practices. These considerations can provide criteria for decision makers to identify, assess, and evaluate options for achieving their regulatory objectives.
Further, we have offered elements for each consideration as concrete questions that agencies can ask themselves as they design their regulatory approaches to elicit compliance within statutory authority and available resources. Not all considerations are applicable in every instance. We recognize there are tradeoffs inherent in any choice, but we believe that these key considerations can strengthen agency decision making, resulting in more informed designs, plans for evaluations, and ongoing changes to compliance and enforcement approaches. We provided a draft of this report to the Secretaries of Agriculture, Commerce, Health and Human Services, Labor, and Transportation, the Administrator of the Environmental Protection Agency, and the Director of the Office of Management and Budget for comment. The Departments of Agriculture, Health and Human Services, and Labor and the Environmental Protection Agency provided technical comments that were incorporated as appropriate. The Departments of Commerce and Transportation and the Office of Management and Budget did not provide comments. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretaries of Agriculture, Commerce, Health and Human Services, Labor, and Transportation; the Administrator of the Environmental Protection Agency; the Director of the Office of Management and Budget; and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6806 or krauseh@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. Appendix I: Objectives, Scope, and Methodology You asked us to review how agencies make key decisions related to regulatory design, compliance and enforcement, and updating of regulations. This report describes how selected agencies report (1) making decisions on regulatory designs among available options, (2) making decisions to designate resources among available compliance and enforcement activities, and (3) evaluating those decisions, and also identifies (4) key considerations for decision makers related to regulatory design and enforcement. To describe agency experiences and decisions regarding regulatory design and compliance and how they evaluate those decisions, we reviewed regulatory processes at six departments and 13 subcomponents within those departments. To illustrate a wide range of regulatory designs and resulting compliance activities, we selected the six executive branch departments—excluding the Department of Defense—that promulgated the largest number of significant regulations between September 1, 2011, and August 31, 2016. These departments were the United States Departments of Agriculture (USDA), Commerce (Commerce), Health and Human Services (HHS), Labor (Labor), and Transportation (DOT), and the Environmental Protection Agency (EPA). Among other inputs, the selected departments were also among those that most often promulgated regulations that were anticipated to affect small entities (such as small businesses, nonprofits, and governments) during the same time period. We used reginfo.gov to identify the number of significant regulations.
We assessed the reliability of those data by reviewing relevant documentation, interviewing knowledgeable agency officials, and electronically and manually testing the data for missing values, outliers, and invalid values, and we found the data to be sufficiently reliable for the purpose of identifying selected departments. The experiences of these selected executive branch departments are illustrative and nongeneralizable. From these departments, we selected subcomponents for nongeneralizable case studies. We selected these subcomponents based on information, provided by department officials engaged in regulatory activities, about their subcomponents’ use of a variety of regulatory designs and any experience making changes to their regulatory design or compliance strategies based on new information (such as evaluations) or new circumstances (such as evolving technologies or changes in agency resources for compliance). We also asked department officials about subcomponents’ use of compliance activities other than traditional compliance assistance and enforcement. To further inform our selection of subcomponents, we reviewed past Inspector General and our own work on types of regulatory designs and compliance strategies. We did not include independent regulatory agencies in our scope as they are not subject to directives from the Office of Management and Budget’s (OMB) Office of Information and Regulatory Affairs. Furthermore, many independent agencies promulgate and administer financial regulations, which present different considerations and have been the focus of other work we performed. In reviewing enforcement strategies used by agencies, we did not review federal regulatory programs for which enforcement has been delegated to states or localities. To illustrate how our selected agencies make decisions regarding regulatory design and compliance and how they evaluate those decisions, we reviewed agency written procedures and interviewed department and subcomponent officials on their practices for making these decisions. To develop themes and examples from our documentary and testimonial evidence, we analyzed information from relevant documents and interviews to identify and confirm common patterns as well as differences across selected agencies. These experiences illustrate how the selected agencies currently make these decisions, the outcomes of those decision-making processes, and their evaluation practices. To identify key considerations for decision makers related to regulatory design and enforcement, we reviewed existing criteria documents, including (1) elements of the Regulatory Flexibility Act; (2) applicable executive orders and guidance such as Executive Order 12866 and OMB Circulars A-4, A-11, and A-123; and (3) resources for federal managers, and leading practices we had previously reported on for enterprise risk management. To ensure that our considerations incorporated applicable academic and government research and findings, we conducted a literature review. Our literature review incorporated searches of several academic and government literature sources—including bibliographic databases such as ProQuest, Scopus, Academic OneFile, Public Affairs Information Service, and LexisNexis—for articles or studies published from January 2011 through August 2016.
We searched for articles using several combinations of relevant key words such as: “regulatory design,” “regulatory structure,” “regulatory compliance,” and “regulatory enforcement.” We then identified the articles that were relevant to our objectives based on independent reviews by two team analysts. In addition, we searched our own and selected federal Inspector General websites for any reports relevant to our objectives. These searches were not meant to be a comprehensive search of all available literature on the topic, but rather were conducted to identify relevant work to inform our identification of key regulatory design and enforcement considerations for decision makers. We developed a data collection instrument for each of the academic and government literature search sources and our own reports. To analyze and summarize the results of the academic literature search, two analysts independently reviewed each relevant record in the search results to document information that was relevant to our objectives and to identify key themes to inform our key considerations. We reviewed all relevant articles and reports and summarized information in the data collection instrument that related to the following topics: regulatory design; regulatory design principles; enforcement and compliance; enforcement and compliance principles; regulatory or subject matter area; and general observations that were relevant to the engagement’s objectives. In addition, we reviewed the annotated citations and references in selected articles to identify additional articles to include in the literature review and ensure that we were not omitting key literature related to regulatory design and enforcement. After applying identified criteria—including key practices and elements of those practices—to decision making about regulatory design and compliance, we obtained input on those considerations from officials at our selected agencies and from subject matter specialists. We initially selected and interviewed relevant specialists based on the results of our literature review (i.e., the authors of relevant articles or books included in our review). Based on suggestions from those individuals, we expanded our list of specialists and conducted a second round of interviews, ultimately speaking with 14 specialists. These considerations were also refined by the current practices and approaches of the selected agencies we reviewed. We conducted this performance audit from August 2016 to October 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: GAO Contact and Staff Acknowledgments GAO Contact Heather Krause at (202) 512-6806 or krauseh@gao.gov. Acknowledgments In addition to the contact named above, key contributors to this report were Tim Bober, Assistant Director, Alexandra Edwards, Danny Berg, and Travis Hill. In addition, John Hussey, Timothy Guinane, Andrea Levine, Kayla Robinson, Robert Robinson, and Cynthia Saunders provided key assistance.
Why GAO Did This Study Within the limits of their statutory authority, agencies may design their regulations in different ways to achieve intended policy outcomes. Agencies also decide how they will promote compliance with their regulations and ensure that regulated entities are informed of regulatory requirements. GAO was asked to review how agencies make regulatory design and enforcement decisions. This report describes how selected agencies report (1) making decisions on regulatory designs among available options, (2) making decisions to designate resources among available compliance and enforcement activities, and (3) evaluating those decisions, and also identifies (4) key considerations for decision makers related to regulatory design and enforcement. To describe how agencies make and evaluate these decisions, GAO reviewed regulatory processes and spoke with officials at six executive departments—the Departments of Agriculture (USDA), Commerce, Health and Human Services (HHS), Labor (Labor), and Transportation, and the Environmental Protection Agency (EPA)—based on volume of significant rulemaking, and 13 subcomponents within those departments. To identify key considerations for regulatory decision makers, GAO reviewed existing criteria, including statutory and executive requirements, conducted a literature review, and obtained input on identified considerations from subject matter specialists. GAO is not making any recommendations in this report. USDA, HHS, Labor, and EPA provided technical comments that were incorporated as appropriate. What GAO Found Agencies have multiple available regulatory designs. Selected agency processes for choosing among them are informed by statutory and executive requirements, regulatory objectives, and statutory discretion. Officials reported a preference for “performance” designs that establish an outcome but allow flexibility in how to achieve it, but stated that in some cases their objectives could require the use of more prescriptive “design-based” regulations that specify a certain required technology or action. Officials at all selected agencies stated that they discuss potential regulatory designs internally, but some agency processes also included practices such as documentation of identified design options and assessments of the options' risks and enforcement implications. Selected agencies used multiple tools and approaches for allocating resources to elicit compliance. Agencies generally have flexibility to use a mix of tools, including providing compliance assistance to help regulated entities understand requirements, and monitoring and enforcement through inspections. Selected agency processes to allocate compliance resources vary, and agencies reported using collected data to target enforcement resources to address risks. Selected agencies supplemented feedback on effectiveness of their regulatory design and enforcement approaches with evaluations, which agency officials said could prompt changes. When agencies identify noncompliance, selected agencies may update their regulation or their compliance strategy. GAO identified key considerations to strengthen agency decisions related to regulatory design and enforcement (see figure). These build on current directives, academic research, and the experiences of selected agencies and are intended to serve as a resource for decision makers in designing—or redesigning—their regulations and determining how best to elicit compliance.
Background The Aviation and Transportation Security Act established TSA as the federal agency with primary responsibility for securing the nation’s civil aviation system, which includes acquiring technology to screen and secure travelers at the nation’s TSA-regulated airports. TSARA defines SRT as any technology that assists TSA in the prevention of, or defense against, threats to United States transportation systems, including threats to people, property, and information. As illustrated in figure 1, TSA acquired various SRT for passenger and baggage screening, including: Advanced Imaging Technology (AIT)—screens passengers for metallic and nonmetallic threats; Explosives Trace Detection—detects various types of commercial and military explosives through chemical analysis on passengers and their property; and Explosives Detection System (EDS)—provides imaging, screening, and detection capabilities to identify possible threats in checked baggage contents. DHS Acquisition Process TSA follows DHS’s policies and procedures for managing its acquisition programs, including for acquisition management, test and evaluation, and resource allocation of its SRT. The policies governing TSA’s acquisition programs are primarily set forth in DHS Acquisition Management Directive 102-01 (DHS’s acquisition directive) and DHS Instruction Manual 102-01-001, Acquisition Management Instruction/Guidebook. DHS acquisition policy establishes that an acquisition program’s decision authority should review the program at a series of predetermined acquisition decision events to assess whether the program is ready to proceed through the acquisition life cycle phases. An acquisition program is established once it has passed through the phases that establish the acquisition need and select an option that meets this need. Figure 2 depicts the DHS acquisition life cycle. Under DHS’s acquisition directive, TSA is to ensure, among other things, that required acquisition documents are completed. Two of these key acquisition documents are: (1) the life cycle cost estimate, which provides an exhaustive and structured accounting of all resources and associated cost elements required to develop, produce, deploy, and sustain a program; and (2) the acquisition program baseline, which establishes a program’s cost, schedule, and performance metrics. These documents are used throughout the process to identify instances when an acquisition program exceeds cost, schedule, or performance thresholds. TSA’s acquisition policies, which supplement DHS policies, generally designate roles and responsibilities and identify the procedures that TSA is to use to implement the requirements in DHS policies. In December 2017, TSA reorganized its acquisition offices, which are responsible for implementing TSARA’s requirements, from two offices (Office of Acquisition and Office of Security Capabilities) into three offices: Requirements and Capabilities, Acquisition Program Management, and Contracting and Procurement. TSARA Requirements TSARA includes a number of requirements for TSA, including developing and submitting a biennial technology investment plan and annual small business contracting goals reports to Congress, adhering to various acquisition and inventory policies and procedures, and ensuring consistency with the Federal Acquisition Regulation and departmental policies and directives.
TSARA also includes requirements for justifying acquisitions and establishing acquisition baselines, which largely codify aspects of DHS’s existing acquisition policy described in DHS’s acquisition directive. TSA fulfills these requirements through the processes outlined in DHS’s acquisition directive when establishing a new acquisition program or modifying an existing acquisition program. See Appendix I for the list of TSARA’s requirements. TSA Generally Addressed TSARA Requirements Since 2016, TSA generally addressed TSARA requirements through its acquisition policies and procedures. Since our February 2016 report, TSA has also developed and issued an updated technology investment plan. Further, TSA has continued to submit an annual report to Congress on TSA’s performance record in meeting its published small business contracting goals. TSA Policies and Procedures Continue to Address TSARA’s Requirements TSA continues to address TSARA’s requirements, including those related to acquisition justifications, baseline requirements, managing inventory, and consistency with regulations. In addition, TSA developed an updated technology investment plan and submitted small business contracting goals reports to Congress in accordance with TSARA. Acquisition Justifications TSARA provides that before TSA implements any SRT acquisition, the agency must, in accordance with DHS policies and directives, conduct an analysis to determine whether the acquisition is justified. The analysis must include elements such as cost effectiveness and confirmation that there are no significant risks to human health or safety posed by the proposed acquisition, among others. In February 2016, we reported that DHS and TSA policies and procedures that were in place prior to TSARA addressed each of the elements required in the analysis. For example, DHS’s acquisition directive includes several of these elements in its process for establishing a new acquisition program. TSARA also includes a provision requiring TSA to submit information (i.e., a report) to Congress 30 days prior to the award of a contract for an SRT acquisition over $30 million. TSA established procedures that address this provision, as discussed later in this report, by developing a template for providing the required justifications. We found that, since 2016, TSA continues to have policies in place, such as DHS’s acquisition directive, to address the analysis-related requirements. TSA officials stated they would use these policies and procedures to address TSARA’s requirements. Baseline Requirements TSARA requires that before TSA implements any SRT acquisition, the appropriate acquisition official from the department shall establish and document a set of formal baseline requirements and subsequently review whether acquisitions are meeting these requirements. Additionally, TSARA provides that TSA must report a breach if results of any assessment find that (1) actual or planned costs exceed the baseline costs by more than 10 percent, (2) actual or planned schedule for delivery has been delayed more than 180 days, or (3) there is a failure to meet any performance milestone that directly impacts security effectiveness. Pursuant to TSARA, in March 2016, TSA reported two breaches to Congress for the Passenger Screening Program and Security Technology Integrated Program (STIP), a data management system that connects transportation security equipment to a single network.
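To illustrate, the three breach criteria reduce to simple threshold comparisons. The sketch below shows one way such checks could be expressed; the function, field names, and figures are hypothetical assumptions for illustration, not TSA data or systems:

```python
# Illustrative sketch of TSARA's breach criteria; all names and figures
# here are hypothetical assumptions, not TSA data or systems.
from datetime import date

def check_breach(baseline_cost, current_cost,
                 baseline_delivery, planned_delivery,
                 missed_security_milestone):
    """Return the list of TSARA breach conditions met, if any."""
    breaches = []
    # (1) Actual or planned costs exceed baseline costs by more than 10 percent.
    if current_cost > baseline_cost * 1.10:
        breaches.append("cost growth over 10 percent")
    # (2) Actual or planned delivery delayed by more than 180 days.
    if (planned_delivery - baseline_delivery).days > 180:
        breaches.append("schedule delay over 180 days")
    # (3) Failure to meet a performance milestone that directly impacts
    # security effectiveness.
    if missed_security_milestone:
        breaches.append("missed security-effectiveness milestone")
    return breaches

# Hypothetical example: 12 percent cost growth and a 200-day schedule slip.
print(check_breach(100_000_000, 112_000_000,
                   date(2016, 1, 1), date(2016, 7, 19),
                   missed_security_milestone=False))
# -> ['cost growth over 10 percent', 'schedule delay over 180 days']
```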
Further, in February 2016, we reported that TSA had policies in place that require it to prepare an acquisition program baseline, risk management plan, and staffing requirements before acquiring SRT, in accordance with TSARA requirements. We found that since our February 2016 report, TSA continues to leverage the existing DHS acquisition directive to meet all of TSARA’s baseline requirements. Managing Inventory TSARA requires that TSA, among other things: to the extent practicable, use existing units in inventory before procuring more equipment to fulfill a mission need; track the location, use, and quantity of security-related equipment in inventory; and implement internal controls to ensure accurate and up-to-date data on SRT owned, deployed, and in use. In 2016, we reported that TSA’s policies and procedures address TSARA requirements for managing inventory. We found that since our February 2016 report, TSA continues to use established policies and procedures to address TSARA’s inventory management requirements. For example, TSA continues to use the Security Equipment Management Manual, which describes the policies and procedures that require TSA to use equipment in its inventory if, for example, an airport opens a new terminal. Additionally, TSA has procedures to track the location, use, and quantity of security-related equipment in inventory, regardless of whether such equipment is in use. Specifically, TSA has procedures to track the entire life cycle of equipment, including initial possession, any moves, and disposal. Further, TSA continues to use standard operating procedures developed by its Internal Control Branch, which describe TSA’s system of internal controls to conduct reviews, report, and follow up on corrective actions. Consistency with Regulations TSARA provides that TSA must execute its acquisition-related responsibilities in a manner consistent with and not duplicative of the Federal Acquisition Regulation and DHS policies and directives. In 2016, we reported that TSA’s policy documents state that TSA is required to ensure that its policies and directives are in accordance with the Federal Acquisition Regulation and DHS acquisition and inventory policies and procedures. We also reported that according to TSA’s TSARA Implementation Strategy Memo (implementation strategy memo), TSA was able to address this requirement by, among other things, forming a working group as part of an effort to ensure that TSA implemented TSARA in a manner consistent with the Federal Acquisition Regulation and DHS policies and directives. We found that no changes have been made to the implementation strategy memo since our 2016 report, and TSA still has policies in place to execute the responsibilities set forth in TSARA in a manner consistent with and not duplicative of the Federal Acquisition Regulation and DHS policies and directives. TSA Developed an Updated Technology Investment Plan in Accordance with TSARA TSARA requires TSA to develop and submit to Congress a Strategic Five-Year Technology Investment Plan (technology investment plan) and update it on a biennial basis. The technology investment plan is to include, among other things, a set of SRT acquisition needs that includes planned technology programs and projects with defined objectives, goals, timelines and measures, and an identification of currently deployed SRTs that are at or near the end of their life cycles.
In August 2015, TSA developed and submitted to Congress the first technology investment plan, and in 2016 we reported that the 2015 plan generally addressed TSARA requirements. In December 2017, TSA developed and submitted to Congress an updated technology investment plan in accordance with TSARA. The updated plan details the aviation security efforts TSA initiated, developed, or completed since the initial plan was released. The updated plan also includes the extent to which TSA’s acquisitions were consistent with technology programs and projects identified in the initial plan, as required by TSARA. TSA officials stated that a positive effect of TSARA’s requirement to develop the technology investment plan has been the establishment of the Innovation Task Force. The task force, created in the spring of 2016, is responsible for identifying and demonstrating emerging capabilities and facilitating other innovative projects at select airports. TSA established the task force based on feedback from industry representatives provided during development of the initial plan. A TSA official who manages the task force said that it led to efficiencies in TSA’s acquisition process. The official noted, for example, that the task force began demonstrating Automated Screening Lanes in March 2016 and that by October 2016 DHS had approved additional deployments of the technology. For a video of TSA’s Innovation Task Force demonstration of Automated Screening Lanes, see the hyperlink in the note for figure 3. TSA Continues to Submit Required Small Business Reports to Congress TSARA requires TSA to submit an annual report to Congress on TSA’s performance record in meeting its published small business contracting goals during the preceding fiscal year. If the preceding year’s goals were not met or TSA’s performance was below the published small business contracting goals set for the department, TSARA requires that TSA’s small business report include a list of challenges that contributed to TSA’s performance and an action plan, with benchmarks, for addressing each of the challenges identified that is prepared after consultation with other federal departments and agencies. Since our last review, TSA has submitted small business reports for fiscal years 2014 through 2017 and has reported achieving its small business contracting goals. TSA’s Narrow Application of TSARA Has Resulted in Limited Reporting to Congress on SRT-related Acquisitions Through July 2018, TSA’s narrow application of TSARA’s report and certification provision resulted in no SRT acquisitions being reported to Congress pursuant to TSARA. In August 2018, TSA provided its first three notifications on SRT acquisitions to Congress under this provision. None of the Over $1 Billion TSA Obligated to Acquire SRT and Associated Services From December 18, 2014 Through July 2018 Resulted in TSA Reporting Under TSARA From TSARA’s enactment through July 2018, TSA did not provide Congress with any information under the act’s report and certification provision on contract awards or task or delivery orders for the acquisition of SRT and associated services. Under the provision, TSA is to provide Congress with a comprehensive justification and a certification that the benefits to transportation security justify the contract cost not later than 30 days preceding the award of a contract for any SRT acquisition over $30 million.
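In practice, screening contract actions against this provision reduces to comparing obligation or ceiling amounts against the $30 million threshold. The following minimal sketch illustrates the idea; the contract numbers, record layout, and amounts are hypothetical assumptions and do not reflect the actual FPDS-NG schema:

```python
# Illustrative sketch only: the contract numbers, record layout, and amounts
# below are hypothetical assumptions, not actual FPDS-NG data or its schema.
THRESHOLD = 30_000_000  # TSARA's report and certification threshold

# Each record: (contract number, order number or None for a base award, amount)
contract_actions = [
    ("HSTS04-15-D-XX0001", None,   12_500_000),
    ("HSTS04-15-D-XX0001", "0007", 55_000_000),  # an order over $30 million
    ("HSTS04-16-D-XX0002", "0002", 18_000_000),
]

for contract, order, amount in contract_actions:
    if amount > THRESHOLD:
        kind = "base award" if order is None else f"order {order}"
        print(f"{contract} ({kind}): ${amount:,} exceeds the $30 million threshold")
```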
Our analysis of FPDS-NG data on contract obligations from December 18, 2014, through July 2018 found approximately $1.4 billion in obligations for acquisitions of SRT and for services associated with the operation of SRT, as shown in table 1. Specifically, TSA obligated $591 million for SRT. For services associated with an SRT that are necessary to ensure its continuous and effective operation, such as maintenance and engineering support services, TSA obligated $772 million during this timeframe. TSA’s Policy for Implementing TSARA’s Report and Certification Provision Reflects a Narrow Application of the Act TSA officials said that none of the agency’s acquisition activities from enactment through July 2018 invoked TSARA’s report and certification provision because the activities did not align with TSA’s policy that identifies SRT acquisitions subject to this provision. TSA’s policy on what constitutes an SRT and the award of a contract for an SRT acquisition ultimately determine what acquisitions are subject to TSARA’s report and certification provision. See table 2 for TSA’s policy. TSA’s TSARA Implementation Strategy Memo states, “[T]o support and ensure Congress is receiving the necessary information regarding critical TSA acquisitions, TSA will focus on security screening related technologies,” which will ensure “the necessary actions are implemented for those technologies the public directly interacts with (i.e. is impacted by).” According to TSA officials, security screening related technologies, i.e., SRT, subject to TSARA must (1) be equipment or technology and (2) interact with (or impact) the public. Specific examples of SRT subject to TSARA, as identified by TSA officials, are the equipment typically deployed to airports to assist in the physical screening of passengers and their property, such as AIT, EDS, and boarding pass scanners. TSA officials explained that, in accordance with this policy, TSA provided its first three notifications to Congress under TSARA’s report and certification provision in August 2018, more than 30 days prior to the award of three new SRT contracts, each with ceiling values in excess of $30 million. TSA Does Not Report SRT-Associated Services Under TSARA Since the enactment of TSARA through July 2018, TSA awarded multiple indefinite-delivery/indefinite-quantity (IDIQ) contracts and entered into a blanket purchase agreement for services associated with the operation of SRT, each with values in excess of $30 million, and issued orders under the contracts and agreement that exceeded $30 million. In accordance with TSA’s implementation policy, which applies to acquisitions of physical screening equipment, TSA did not report these acquisition actions under TSARA’s report and certification provision. TSA officials said, consistent with the agency’s implementation policy, that services associated with the operation of the SRT, such as engineering support, maintenance services, and other services described in table 1, are not SRT, as TSARA defines the term, because they are not equipment that directly interacts with the public. Associated services, however, are necessary to ensure the effective performance of SRT. For example, engineering support can assist in addressing changing security needs, such as through the development of threat detection algorithms and other software or hardware improvements. Associated services have also been used to extend the intended lifecycle of SRT already deployed to airport checkpoints.
TSA officials said that research and development advancements have allowed TSA to upgrade existing equipment that had reached the end of its initial lifecycle rather than acquire new equipment. Further, TSA will likely need to increase spending on maintenance services because equipment parts may break down when used past their intended life cycles. Consequently, by upgrading and maintaining existing SRT through maintenance and hardware improvements, TSA has been able to offset the need to procure new SRT. Examples of contract actions for the associated services described in table 1 include: Maintenance Services: TSA awarded three IDIQ contracts in 2015 and 2016, with ceiling values ranging from $76 million to $222 million, and issued 10 orders under these IDIQ contracts with obligations that each exceeded $30 million; System Integration: TSA awarded three IDIQ contracts in 2015, each with a ceiling value of $450 million; STIP: In November 2017, TSA awarded a blanket purchase agreement with a ceiling value of $250 million; and Security Technology Support Services: TSA awarded three IDIQ contracts in 2017 with ceiling values ranging from $65 million to $169 million. The report of the Committee on Homeland Security of the House of Representatives on TSARA explains that the law introduces greater transparency and accountability for TSA spending decisions and codifies acquisition best practices that the committee believes will result in more effective and efficient SRT acquisitions at TSA. As explained in the report, TSARA is, in part, a response to historical examples where TSA spent significant funds on SRT acquisitions that failed to meet security performance objectives or wasted federal funds. Consistent with the purpose of the statute expressed in the committee report, TSARA’s report and certification provision promotes greater transparency over TSA acquisition practices. TSA obligates a significant amount of funds—approximately $772 million from TSARA’s enactment through July 2018—for services that help ensure the effective and continuous operation of SRT. Applying TSARA’s report and certification provision to a broader range of services associated with the operation of SRT would provide Congress with increased transparency and improved oversight of TSA’s SRT acquisition practices. TSA Does Not Report SRT Task and Delivery Orders Under TSARA According to TSA’s TSARA implementation policy, indefinite-quantity contracts or blanket purchase agreements for “security screening related technology equipment,” i.e., SRT, are subject to TSARA’s report and certification provision when the ceiling value exceeds $30 million. The implementation policy also explains that the provision does not apply to individual task and delivery orders placed under these contracts or agreements. However, IDIQ contracts typically have a lengthy period of performance—for example, one base year followed by four option years. Specifically, from December 18, 2014 through July 2018, all of TSA’s 14 active contracts for SRT were IDIQ contracts awarded prior to the enactment of TSARA on December 18, 2014. Further, 8 of the 14 contracts had been in place for 5 or more years, and according to TSA officials, the agency had extended the original period of performance for 9 of the 14 contracts.
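The difference between TSA’s implementation policy and a broader reading of the report and certification provision can be expressed as two different reporting triggers. The sketch below contrasts them using hypothetical contract actions; it is a simplification for illustration, with assumed field names and figures, and does not represent TSA’s actual decision process or data:

```python
# Hypothetical contrast of two reporting triggers; a simplification for
# illustration, not TSA's actual decision process or data.
THRESHOLD = 30_000_000

def reportable_under_tsa_policy(action):
    # TSA's implementation policy: report only an initial SRT contract or
    # blanket purchase agreement award whose ceiling value exceeds $30 million.
    return action["type"] == "initial_award" and action["ceiling"] > THRESHOLD

def reportable_under_broader_reading(action):
    # A broader reading would also capture individual task or delivery
    # orders whose value exceeds $30 million.
    if reportable_under_tsa_policy(action):
        return True
    return action["type"] == "order" and action["value"] > THRESHOLD

# Illustrative actions: a $500 million IDIQ award followed by a
# $55 million order issued under it.
actions = [
    {"type": "initial_award", "ceiling": 500_000_000, "value": 0},
    {"type": "order", "ceiling": 0, "value": 55_000_000},
]
for action in actions:
    print(action["type"],
          "| TSA policy:", reportable_under_tsa_policy(action),
          "| broader reading:", reportable_under_broader_reading(action))
# initial_award | TSA policy: True | broader reading: True
# order | TSA policy: False | broader reading: True
```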
Per its implementation policy, TSA did not report to Congress under TSARA’s report and certification provision on the seven task orders, ranging from $31 million to $70 million, that it issued under IDIQ contracts in place at the time of TSARA’s enactment to purchase and install EDS, EDS upgrade kits, and explosives trace detection systems. See figure 4 for an example of an EDS IDIQ contract where TSA issued orders in excess of $30 million and extended the contract’s original period of performance. One of TSA’s most recent SRT contract awards further illustrates how TSA’s policy of reporting only on initial contract awards, and not orders issued pursuant to the contract, has resulted in limited reporting under TSARA’s report and certification provision. In September 2018, TSA awarded a new $500 million IDIQ contract for the acquisition of medium-speed explosives detection systems. TSA reported this contract award to the requisite committees pursuant to the report and certification provision and consistent with its implementation policy. However, under TSA’s policy, this is the only notification that Congress will receive pursuant to TSARA over the course of the contract’s period of performance. For example, TSA also issued a $55 million order to purchase and install medium-speed EDS units under this IDIQ contract, but, per its implementation policy, TSA did not report this order to Congress under the provision and would not do so for any subsequent orders during the contract’s period of performance. TSA has developed a policy with parameters for determining which contract actions are subject to TSARA. However, TSA’s policy limits the application of the report and certification provision only to initial contract awards for physical security screening equipment. According to TSA officials, TSA established this policy to ensure Congress is informed as early as possible that there is potential for an award in excess of $30 million, as opposed to the point at which amounts awarded reach $30 million. However, the implementation policy expressly excludes orders in excess of $30 million issued under IDIQ contracts or blanket purchase agreements for SRT. Due to this narrow application of TSARA to its SRT acquisitions, TSA did not report any information to Congress pursuant to TSARA’s report and certification provision through July 2018. In addition, as currently implemented, this policy will continue to result in TSA providing Congress with limited information in the future. As described earlier, TSARA was enacted to introduce greater transparency and accountability for TSA spending decisions. Because TSA’s policy for the report and certification provision excludes reporting on task and delivery orders, TSA misses the opportunity to inform Congress of the more routine SRT obligations that exceed TSARA’s $30 million threshold. In addition, applying TSARA’s report and certification provision to services that result in new capabilities or enhancements or that otherwise upgrade SRTs would provide Congress with increased transparency and improved oversight of TSA’s SRT acquisition practices. TSA Has Not Effectively Communicated Internally Its TSARA Implementation Decisions TSA has not effectively communicated its implementation decisions internally for what constitutes an SRT under TSARA. After the enactment of TSARA, TSA formed a working group to evaluate the act and develop an implementation strategy.
The resulting policy is documented in TSA’s TSARA Implementation Strategy Memo, published in June 2015. According to TSA officials, the memo is the only formal document that describes TSA’s TSARA policy. Among other things, the memo designates roles and responsibilities for TSARA’s requirements and outlines TSA’s approach to implementing each requirement. To explain what constitutes an SRT for the purposes of TSARA, TSA officials described various parameters to us that guide their decision-making. However, not all of these parameters are documented in the implementation strategy memo. Specifically, the memo states that, “To support and ensure Congress is receiving the necessary information regarding critical TSA acquisitions, TSA will focus on security screening related technologies. This ensures the necessary actions are implemented for those technologies the public directly interacts with (i.e. is impacted by).” TSA officials clarified for us that technologies the public does not directly interact with or that do not otherwise impact the public in some physical manner, such as STIP and Secure Flight, are not considered SRT and thus not subject to TSARA, but this distinction is not clearly documented. Further, the memo does not explicitly explain which technologies are considered SRT and which are not. For example, TSA officials told us that SRT under TSARA excludes software such as updates to threat detection algorithms, and other associated services such as STIP, but this is not documented in the memo. TSA acquisition program staff are responsible for determining whether a new acquisition qualifies as SRT under TSARA and initiating TSA’s congressional notification process. TSA officials stated that program staff rely upon the TSARA Implementation Strategy Memo to make these decisions. During our review, TSA’s acquisition program staff were initially unable to confirm in all instances whether the security-related equipment they had acquired was subject to TSARA. Over the course of our review, TSA officials clarified the application of TSARA’s SRT definition to us and, based on our inquiries, confirmed a list of existing technologies that are considered SRT. However, this information has not been documented in the TSARA Implementation Strategy Memo. TSA officials explained that there was considerable activity to determine how to comply with TSARA after it was initially enacted, but that this activity faded after the implementation working group disbanded. Consequently, the implementation strategy memo has not been updated since its initial distribution in June 2015. TSA officials stated that they plan to update the implementation strategy memo by the end of calendar year 2018 to reflect the new offices responsible for implementing TSARA’s requirements due to an internal reorganization. Effective information and communication are vital for an entity to achieve its objectives. Standards for Internal Control in the Federal Government states that management should document policies in the appropriate level of detail and internally communicate the necessary quality information to achieve the entity’s objectives. In the absence of a policy that clearly states what constitutes an SRT and with several large acquisitions pending, TSA may be missing an opportunity to ensure effective and consistent implementation of TSARA.
Conclusion TSA spends hundreds of millions of dollars each year developing, acquiring, deploying, and maintaining technologies in furtherance of its mission to ensure civil aviation security. Through TSARA, Congress sought to address challenges faced by TSA in effectively managing its acquisitions and procurements by specifying measures for TSA to implement that align with identified acquisition best practices and increase the transparency and accountability of TSA’s SRT acquisitions. Overall, TSA has policies and procedures in place to accomplish many of the reforms sought by TSARA, but more could be done to improve the transparency of its spending on SRTs. Specifically, reporting on individual task and delivery orders as well as associated services under TSARA’s report and certification provision would help TSA ensure that Congress has timely information it could use to effectively oversee TSA acquisitions. TSA took a positive step toward greater transparency on SRT spending with its first notifications to Congress in August 2018, made in accordance with its policy, but TSA’s existing policy does not require similar notifications for associated services or for individual task and delivery orders that exceed $30 million. Further, while TSA developed the TSARA Implementation Strategy Memo, which serves as TSA’s policy for implementing TSARA, designates roles and responsibilities for TSARA’s requirements, and outlines TSA’s approach to implementing each requirement, TSA has not clearly documented and internally communicated its parameters on what constitutes an SRT under TSARA. With several large acquisitions pending, clear guidance would better assure that staff understand how TSARA’s reporting requirements apply. In the absence of updated internal policy to clearly communicate what is or is not an SRT, TSA will continue to be at risk of inconsistent and incomplete implementation of TSARA. Recommendations for Executive Action We are making the following three recommendations to TSA: The TSA Administrator should revise TSA’s policy to require that TSA also submit information under TSARA’s report and certification provision prior to the award of contracts and blanket purchase agreements for services associated with the operation of security-related technology, such as maintenance and engineering services, that exceed $30 million. (Recommendation 1) The TSA Administrator should revise TSA’s policy to require that TSA also submit information under TSARA’s report and certification provision prior to the issuance of individual task and delivery orders for security-related technology acquisitions that exceed $30 million. (Recommendation 2) The TSA Administrator should clarify and document what constitutes an SRT under TSARA as part of the planned update of TSA’s TSARA implementation policy. (Recommendation 3) Agency Comments and Our Evaluation We provided a draft of this product to DHS for comment. In its comments, reproduced in appendix II, DHS generally concurred with each of the three recommendations and described steps it plans to take to implement them. TSA also provided technical comments, which we incorporated as appropriate. While DHS concurred with our recommendation to revise TSA's policy to include reporting on contracts over $30 million for services associated with the operation of security-related technology, in its letter, DHS stated that not all services associated with an SRT should be subject to TSARA's reporting requirements.
Specifically, it noted that TSA will revise policy language and instructions to ensure that the justification analysis and certification analysis required under TSARA are submitted prior to the award of contracts and blanket purchase agreements for services that would result in new capabilities or enhancements or that would otherwise upgrade SRT. It distinguishes these services from services that are indirectly related to the SRT or used to keep the SRT operational, such as deployment and system integration. We agree with this distinction and do not consider all of the associated services mentioned in this report necessary for inclusion in TSA’s revised policy. Further, we recognize that TSA, in conjunction with feedback from Congress, is best positioned to determine the services included in its revised policy for reporting under TSARA, consistent with its interest in avoiding duplicative or administratively burdensome reporting and delays in the acquisition process. We are encouraged by DHS’s plans to implement this recommendation and its recognition that the additional information will provide Congress with increased transparency and an opportunity for more effective oversight of TSA’s acquisitions. DHS also described planned actions to address our recommendation to revise TSA’s policy to include reporting on individual task and delivery orders that exceed $30 million. DHS expects to complete the revisions by September 30, 2019. If implemented, this action should provide Congress with greater transparency over TSA’s SRT acquisitions. DHS also noted that, in accordance with our recommendation to update its implementation guidance, it plans to (1) clarify and document what constitutes an SRT under TSARA and (2) document all offices responsible for implementing TSARA’s requirements in its TSARA implementation strategy memo by September 30, 2019. If implemented, guidance that is clear and documented will better assure that staff across all DHS offices understand how to consistently implement TSARA. We are sending copies of this report to the appropriate congressional committees and the Secretary of Homeland Security. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-8777 or russellw@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made significant contributions to this report are listed in Appendix III. Appendix I: Transportation Security Acquisition Reform Act Requirements In tables 3 through 8, we identify the requirements of the Transportation Security Acquisition Reform Act (TSARA), as enacted on December 18, 2014. Appendix II: Comments from the Department of Homeland Security Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Kevin Heinz (Assistant Director), Amber Edwards (Analyst-in-Charge), Winchee Lin, Cristina Norland, Richard Hung, Thomas Lombardi, Amanda Miller, and Richard Cederholm made key contributions to this report.
Why GAO Did This Study

Enacted in December 2014, TSARA introduced legislative reforms to promote greater transparency and accountability in TSA's SRT acquisitions. TSARA contains a provision that GAO submit two reports to Congress on TSA's progress in implementing TSARA. In February 2016, GAO issued the first report, which found that TSA had taken actions to address TSARA. This second report examines TSA's (1) progress in addressing TSARA requirements since 2016, (2) reporting to Congress on SRT acquisitions, and (3) internal communication of its implementation decisions.

GAO examined TSARA and TSA documents and guidance; analyzed TSA contract data from TSARA's enactment in December 2014 through July 2018 and TSA reports through September 2018; and interviewed DHS and TSA officials on actions taken to implement TSARA. GAO also interviewed TSA officials on parameters for reporting on SRT acquisitions.

What GAO Found

Since 2016, the Transportation Security Administration (TSA) has generally addressed Transportation Security Acquisition Reform Act (TSARA) requirements through its policies and procedures for acquisition justifications, baseline requirements, and management of inventory. TSA also, among other actions, submitted a technology investment plan and annual small-business contracting goals reports to Congress as required.

Since December 2014, TSA has reported few security-related technology (SRT) acquisitions to Congress under TSARA, submitting its first report in August 2018. TSARA contains a report and certification provision pursuant to which TSA is to submit information to Congress 30 days prior to the award of a contract for an SRT acquisition exceeding $30 million. Through July 2018, TSA obligated about $1.4 billion on SRT and associated services. TSA officials explained that none of these obligations—including seven SRT orders, each in excess of $30 million—invoked the report and certification provision because the obligations did not align with TSA's implementation policy, which provides that the $30 million threshold relates to the contract ceiling of the initial SRT contract and not to individual task and delivery orders. Revising TSA's policy to include contracts for services that enhance the capabilities of SRT, including any orders for SRT and associated services in excess of $30 million, would better ensure that Congress has the information it needs to effectively oversee TSA's SRT acquisitions.

TSA has not effectively communicated internally its implementation decisions on what constitutes an SRT under TSARA. TSA officials told GAO that an SRT must be equipment that is public facing, but TSA's policy does not clearly state the parameters of what is considered an SRT. Without clear guidance, TSA staff may be unaware of these parameters and how they apply to future acquisitions under TSARA. For example, TSA acquisition program staff were initially unable to confirm for GAO whether the technologies TSA had acquired were SRTs and thus subject to TSARA. Updating TSA policy to include detailed parameters for what constitutes an SRT would better ensure consistency in applying the act.

What GAO Recommends

GAO recommends that TSA revise its policies for the report and certification provision of TSARA to include reporting on task and delivery orders and services associated with SRT, and clarify in policy what constitutes an SRT under TSARA. DHS generally concurred with the recommendations and described steps it plans to take to implement them.
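The reporting trigger at issue can be sketched as a simple decision rule. The following is only an illustration of the policy gap GAO describes, not TSA's actual policy logic; the function name and action labels are invented, and the only grounded elements are the $30 million threshold and the distinction between the initial contract ceiling and individual orders or associated services.

```python
THRESHOLD = 30_000_000  # dollars

def must_report(action: str, value: float, revised_policy: bool) -> bool:
    """Would the report and certification provision be triggered?"""
    if value <= THRESHOLD:
        return False
    if action == "initial_contract":
        return True  # the initial contract ceiling counts under current policy
    # Task/delivery orders and associated-services contracts would trigger
    # reporting only under the revision GAO recommends.
    return revised_policy

print(must_report("task_order", 45_000_000, revised_policy=False))  # False
print(must_report("task_order", 45_000_000, revised_policy=True))   # True
```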
Background

Among health care programs, Medicaid is the largest as measured by enrollment (over 73 million in fiscal year 2017) and the second largest as measured by expenditures ($596 billion in fiscal year 2017), behind only Medicare. The CMS Office of the Actuary projected that Medicaid spending would grow at an average rate of 5.7 percent per year from fiscal years 2016 to 2025, with projected Medicaid expenditures reaching $958 billion by fiscal year 2025. This projected growth in expenditures reflects expected increases both in expenditures per enrollee and in levels of Medicaid enrollment. Beneficiaries with disabilities and those who are elderly account for the highest per-enrollee expenditures, which are projected to increase by almost 50 percent from fiscal year 2016 to 2025. Medicaid enrollment is also expected to grow by as many as 13.2 million newly eligible adults by 2025, as additional states may expand their Medicaid programs to cover certain low-income adults under the Patient Protection and Affordable Care Act (PPACA). (See fig. 1.)

The partnership between the federal government and states is a central tenet of the Medicaid program. CMS provides oversight and technical assistance for the program, and states are responsible for administering their respective Medicaid programs' day-to-day operations—including determining eligibility, enrolling individuals and providers, and adjudicating claims—within broad federal requirements. Federal oversight includes ensuring that the design and operation of state programs meet federal requirements and that Medicaid payments are made appropriately. (See fig. 2 for a diagram of the federal-state Medicaid partnership framework.) Joint financing of Medicaid is also a fixture of the federal-state partnership, with the federal government matching most state Medicaid expenditures using a statutory formula based, in part, on each state's per capita income in relation to the national average per capita income.

States have flexibility in determining how their Medicaid benefits are delivered. For example, states may (1) contract with managed care organizations (MCO) to provide a specific set of Medicaid-covered services to beneficiaries and pay the organizations a set amount, generally on a per beneficiary per month basis; (2) pay health care providers for each service they provide on a fee-for-service basis; or (3) rely on a combination of both delivery systems. Managed care continues to be a growing component of the Medicaid program. In fiscal year 2017, expenditures for managed care were $280 billion, representing almost half of total program expenditures, compared with 42 percent in fiscal year 2015. (See fig. 3.)

States also have the flexibility to innovate outside of many of Medicaid's otherwise applicable requirements through Medicaid demonstrations approved under section 1115 of the Social Security Act. These demonstrations allow states to test new approaches to coverage and to improve quality and access, or to generate savings or efficiencies. For example, under demonstrations, states have extended coverage to certain populations, provided services not otherwise eligible for federal matching funds, and made incentive payments to providers for delivery system improvements. As of November 2016, nearly three-quarters of states had CMS-approved demonstrations. In fiscal year 2015, total spending under demonstrations represented a third of all Medicaid spending nationwide. (See fig. 4.)
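The statutory matching formula mentioned above is the Federal Medical Assistance Percentage (FMAP). The sketch below reflects the standard formula in section 1905(b) of the Social Security Act, including its 50 percent floor and 83 percent ceiling; the income figures are hypothetical.

```python
def fmap(state_pci: float, national_pci: float) -> float:
    """Federal share of most state Medicaid expenditures for a state."""
    share = 1 - 0.45 * (state_pci / national_pci) ** 2
    return min(max(share, 0.50), 0.83)  # statutory floor and ceiling

# A hypothetical state with per capita income 20 percent below the
# national average receives a federal share of about 71 percent.
print(f"{fmap(40_000, 50_000):.1%}")  # 71.2%
```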
In addition to other types of improper payments, Medicaid presents opportunities for fraud because of the size, expenditures, and complexities of the program—including the variation in states' design and implementation. Medicaid Fraud Control Units (MFCU)—state entities responsible for investigating and prosecuting Medicaid fraud—have reported Medicaid fraud convictions and monetary recoveries in their annual reports. For example, over the past 5 years, MFCUs have reported an average of 1,072 Medicaid fraud convictions per year. They also reported about $680 million in fraud-related recoveries in fiscal year 2017—almost double the recoveries from fiscal year 2016.

Three Broad Areas of Risk Threaten the Fiscal Integrity of Medicaid

Our prior work has identified three broad areas of risk to the fiscal integrity of Medicaid: improper payment rates, state use of supplemental payments, and oversight of demonstration programs.

Estimated Improper Payments Exceed 10 Percent, and Do Not Fully Account for All Program Risks

CMS annually computes the national Medicaid improper payment estimate as a weighted average of states' improper payment estimates for three component parts—fee-for-service, beneficiary eligibility determinations, and managed care. The improper payment estimate for each component is developed under its own methodology. The national rate in fiscal year 2017 was 10.1 percent, or $36.7 billion. Since 2016, the Medicaid improper payment rate has exceeded the 10 percent criterion set in statute. As such, the program was not fully compliant with the Improper Payments Elimination and Recovery Act of 2010.

In May 2018, we reported that the managed care component of the improper payment estimate does not fully account for all program risks in managed care. We identified 10 federal and state audits and investigations (out of 27 focused on Medicaid managed care) that cited about $68 million in overpayments and unallowable managed care organization costs that were not accounted for by the managed care improper payment estimate. Another of these investigations resulted in a $137.5 million settlement to resolve allegations of false claims. We further noted that the full extent of overpayments and unallowable costs is unknown, because the 27 audits and investigations we reviewed were conducted over more than 5 years and involved a small fraction of the more than 270 managed care organizations operating nationwide as of September 2017. Examples of the state audits that identified overpayments and unallowable costs include the following:

The Washington State Auditor's Office found that two managed care organizations made $17.5 million in overpayments to providers in 2010, which may have increased the state's 2013 capitation rates.

The Texas State Auditor's Office found that one managed care organization reported $3.8 million in unallowable costs for advertising, company events, gifts, and stock options, along with $34 million in other questionable costs in 2015.

The New York State Comptroller found that two managed care organizations paid over $6.6 million to excluded and deceased providers from 2011 through 2014.

To the extent that such overpayments and unallowable costs are unidentified and not removed from the cost data used to set managed care payment rates, they may allow inflated future payments and minimize the appearance of program risks in Medicaid managed care.
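To illustrate the weighted-average computation described at the start of this section, the following sketch combines three component estimates into a national rate. All rates and expenditure weights below are invented for illustration; they are not CMS's actual component estimates.

```python
components = {
    # name: (improper payment rate, expenditures in $ billions) -- hypothetical
    "fee-for-service": (0.13, 180.0),
    "eligibility": (0.09, 140.0),
    "managed care": (0.003, 280.0),
}

total = sum(spend for _, spend in components.values())
# Expenditure-weighted average of the component rates.
rate = sum(r * spend for r, spend in components.values()) / total
print(f"national rate: {rate:.1%}; improper payments: ${rate * total:.1f} billion")
```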
The potential understatement of program risks in managed care described above may also curtail investigations into the appropriateness of managed care spending. The continued growth of Medicaid managed care makes ensuring the accuracy of managed care improper payment estimates increasingly important.

In May 2018, we acknowledged that although CMS has increased its focus on and worked with states to improve oversight of Medicaid managed care, its efforts—for example, updated regulations and audits of managed care providers—did not ensure the identification and reporting of overpayments and unallowable costs. In May 2016, CMS updated its regulations for managed care programs, including a requirement that states arrange an independent audit of the data submitted by MCOs at least once every 3 years. We found that although this requirement has the potential to enhance state oversight of managed care, CMS was reviewing the rule for possible revision of its requirements. We also noted that another effort to address program risks in managed care—the use of CMS program integrity contractors to audit providers that are paid by managed care organizations—has been limited.

To address the program risks that are not measured as a part of CMS's methodology to estimate improper payments, in May 2018 we recommended that CMS take steps to mitigate such risks, which could include revising its methodology or focusing additional audit resources on managed care. HHS concurred with this recommendation.

Our prior work on Medicaid has also identified other program risks associated with provider enrollment and beneficiary eligibility that may contribute to improper payments. In table 1 below, we identify some examples of the previous recommendations we have made to address these types of program risks and what, if any, steps CMS has taken in response.

Lack of Transparency and Federal Oversight of States' Use of Supplemental Payments Increases Program Risk

Supplemental payments are payments made to providers—such as local government hospitals—that are in addition to the regular, claims-based payments made to providers for the services they provided. Like all Medicaid payments, supplemental payments are required to be economical and efficient. Supplemental payments have been growing and totaled more than $48 billion in 2016. Our prior work has identified several concerns related to supplemental payments, including the need for more complete and accurate reporting, criteria for economical and efficient payments, and written guidance on the distribution of payments.

Complete and accurate reporting. Our prior work has identified increased use of provider taxes and transfers from local government providers to finance the states' share of supplemental payments, which, although allowed under federal law, effectively shift Medicaid costs from the states to the federal government. In particular, we reported in July 2014 that states' share of Medicaid supplemental payments financed with funds from providers and local governments increased the federal share from 57 percent in state fiscal year 2008 to 70 percent in state fiscal year 2012. The full extent of this shift in states' financing structure was unknown, because CMS had not ensured that states report complete and accurate data on the sources of funds they use to finance their share of Medicaid payments, and CMS's efforts had fallen short of obtaining complete data. (See table 2 below for our recommendation and actions CMS has taken.)
For example, in July 2014, we reported that in one state, a $220 million payment increase for nursing facilities resulted in an estimated $110 million increase in federal matching funds to the state and a net payment increase to the facilities of $105 million. (See fig. 5.)

Criteria for economical and efficient payments. Our prior work has demonstrated that CMS lacks the criteria, data, and review processes to ensure that one type of supplemental payment—non-DSH supplemental payments—is economical and efficient. For example, in April 2015, we identified public hospitals in one state that received supplemental and regular Medicaid payments that, when combined, were hundreds of millions of dollars in excess of the hospitals' total Medicaid costs and tens of millions in excess of their total operating costs—unbeknownst to CMS. Accordingly, we concluded that CMS's criteria and review processes did not ensure that it can identify excessive payments and determine whether supplemental payments are economical and efficient. (See table 2 below for our recommendations and actions CMS has taken.)

Written guidance on the distribution of payments. According to CMS policy, Medicaid payments, including supplemental payments, should be linked to the provision of Medicaid services and not contingent on the provision of local funds. However, in February 2016, we reported that CMS did not have written guidance clarifying this policy. We found examples of hospitals with large uncompensated costs associated with serving the low-income and Medicaid population that received relatively little in supplemental payments, while other hospitals with relatively low uncompensated care costs—but that were able to contribute a large amount of funds for the state's Medicaid share—received large supplemental payments relative to those costs, raising questions as to whether CMS policies are being followed. (See table 2 for our recommendation and actions CMS has taken.)

Recognizing that Congress could help address some of the program risks associated with supplemental payments, in November 2012 we suggested that Congress consider requiring CMS to improve state reporting of supplemental payments, including requiring annual reporting of facility-specific payment amounts; clarify permissible methods for calculating these supplemental payments; and implement annual independent certified audits to verify state compliance with methods for calculating supplemental payments. Subsequent to our work highlighting the need for complete and accurate reporting, in January 2017 a bill was introduced in the House of Representatives that, if enacted, would require annual state reporting of non-DSH supplemental payments made to individual facilities, require CMS to issue guidance to states that identifies permissible methods for calculating non-DSH supplemental payments to providers, and establish requirements for such annual independent audits. Another bill, introduced in October 2017, would require states to submit annual reports that identify the sources and amount of funds used to finance the state share of Medicaid payments. As of May 2018, no action had been taken on either proposed bill.
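The financing arithmetic in the nursing facility example above can be made concrete. The sketch below assumes a 50 percent federal matching rate, an assumption consistent with the reported $110 million in federal matching funds; the $115 million provider contribution is derived from the difference between the reported payment increase and the facilities' net gain.

```python
payment_increase = 220          # $ millions, reported payment increase
federal_matching_rate = 0.50    # assumed, consistent with reported figures
federal_funds = payment_increase * federal_matching_rate       # 110.0
provider_contribution = payment_increase - 105                 # 115, derived
net_to_facilities = payment_increase - provider_contribution   # 105

# Providers' derived contribution exceeds the state's nominal $110 million
# share, illustrating how such financing shifts costs to the federal side.
print(federal_funds, provider_contribution, net_to_facilities)  # 110.0 115 105
```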
Absent Better Oversight, Demonstrations May Increase Federal Fiscal Liability

Demonstration programs, comprising about one-third of total Medicaid expenditures in fiscal year 2015, can be a powerful tool for states and CMS to test new approaches to providing coverage and delivering services that could reduce costs and improve outcomes. However, our prior work has identified several concerns related to demonstrations, including the need to ensure that (1) demonstrations meet the policy requirements of budget neutrality—that is, they must not increase federal costs—and (2) evaluations are used to determine whether demonstrations are having their intended effects.

Budget neutrality of Medicaid demonstrations. Demonstration spending limits, by HHS policy, should not exceed spending that would have occurred in the absence of a demonstration. In multiple reports examining more than a dozen demonstrations between 2002 and 2017, we identified a number of questionable methods and assumptions that HHS has permitted states to use when estimating costs. We found that federal spending on Medicaid demonstrations could be reduced by billions of dollars if HHS were required to improve the process for reviewing, approving, and making transparent the basis for spending limits approved for Medicaid demonstrations. The following are some examples of what we have previously found:

In August 2014, we reported that HHS had approved a spending limit for Arkansas's demonstration—to test whether providing premium assistance to purchase private coverage through the health insurance exchange would improve access for newly eligible Medicaid beneficiaries—that was based, in part, on hypothetical, not actual, costs. Specifically, the spending limit was based on significantly higher payment amounts the state assumed it would have to make to providers if it expanded coverage under the traditional Medicaid program, and HHS did not request any data to support the state's assumptions. We estimated that by allowing the state to use hypothetical costs, HHS approved a demonstration spending limit that was over $775 million more than it would have been had the limit been based on the state's actual payment rates for services under the traditional Medicaid program.

We also reported in August 2014 that HHS officials told us the department granted Arkansas and 11 other states additional flexibility in their demonstrations to increase spending limits if costs proved higher than expected. We concluded that granting the states this flexibility to adjust the spending limit increased the fiscal risk to the federal government.

More recently, in April 2017, we reported that two states used unspent federal funds from their previous demonstrations to expand the scope of subsequent demonstrations by $8 billion and $600 million, respectively. We concluded that inflating the spending limits in this way inappropriately increased the federal government's fiscal liability for Medicaid.

We have previously made recommendations to improve oversight of spending on demonstrations, and HHS recently took action that partially responds to one of these recommendations. (See table 3 for examples of the recommendations and actions HHS has taken.) Specifically, under a policy implemented in 2016, HHS restricted the amount of unspent funds states can accrue for each year of a demonstration and has also reduced the amount of unspent funds that states can carry forward to new demonstrations.
For 10 demonstrations it has recently approved, HHS estimated that the new policy reduced total demonstration spending limits by $109 billion for 2016 through 2018, the federal share of which is $62.9 billion. These limits reduce the effect of, but do not specifically address all of, the questionable methods and assumptions that we have identified regarding how HHS sets demonstration spending limits.

Evaluation of Medicaid demonstrations. In a January 2018 report, we questioned the usefulness of both state-led and federal evaluations of section 1115 demonstrations, particularly with regard to how the evaluation results may inform policy decisions.

State-led evaluations. We identified significant limitations among selected state-led demonstration evaluations, including gaps in reported evaluation results for important parts of the demonstrations. (See table 4.) These gaps resulted, in part, from CMS requiring final, comprehensive evaluation reports after the expiration of the demonstrations rather than at the end of each 3- to 5-year demonstration cycle. In October 2017, CMS officials stated that the agency planned to require final reports at the end of each demonstration cycle for all demonstrations, although it had not established written procedures for implementing this new policy. We concluded in January 2018 that without written procedures for implementing such requirements, gaps in oversight could continue.

Federal evaluations. Evaluations of federal demonstrations led by CMS have also been limited due to data challenges and a lack of transparent reporting. For example, delays in obtaining data directly from states, among other things, led CMS to considerably reduce the scope of a large, multistate evaluation, which was initiated in 2014 to examine the impact of state demonstrations in four policy areas deemed to be federal priorities. In our January 2018 report, we found that although CMS had made progress in obtaining needed data, it had no policy for making the results public. By not making these results public in a timely manner, we concluded, CMS was missing an opportunity to inform important federal and state policy discussions.

In light of our concerns about state-led and federal demonstration evaluations, in January 2018 we recommended that CMS (1) establish written procedures for requiring final evaluation reports at the end of each demonstration cycle, (2) issue criteria for when it will allow limited evaluations of demonstrations, and (3) establish a policy for publicly releasing findings from federal evaluations of demonstrations. HHS concurred with these recommendations.

Fundamental Actions Needed to Strengthen Oversight and Manage Program Risks

Across our body of work, we have made 83 recommendations to CMS and HHS and suggested 4 matters for congressional consideration to address a variety of concerns about the Medicaid program. The agencies generally agreed with our recommendations and have implemented 25 of them to date, but CMS still needs to take fundamental actions in three areas—having more timely, complete, and reliable data; conducting fraud risk assessments; and strengthening federal-state collaboration—to strengthen Medicaid oversight and better manage program risks.

More Complete, Timely, and Reliable Data for Oversight

An overarching challenge for CMS oversight of the Medicaid program is the lack of accurate, complete, and timely data.
Our work has demonstrated how insufficient data have affected CMS's ability to ensure proper payments, assess beneficiaries' access to services, and oversee states' financing strategies. As part of its efforts to address longstanding data concerns, CMS has taken some steps toward developing a reliable national repository for Medicaid data, most notably the Transformed Medicaid Statistical Information System (T-MSIS). Through T-MSIS, CMS will collect detailed information on Medicaid beneficiaries—such as their citizenship, immigration, and disability status—as well as any expanded diagnosis and procedure codes associated with their treatments. States are to report data more frequently—and in a more timely manner—than they have previously, and T-MSIS includes approximately 2,800 automated quality checks. The T-MSIS initiative has the potential to improve CMS's ability to identify improper payments, help ensure beneficiaries' access to services, and improve program transparency, among other benefits.

As we reported in December 2017, implementing the T-MSIS initiative has been—and will continue to be—a multiyear effort. CMS has worked closely with states and has reached a point where nearly all states are reporting T-MSIS data. While recognizing the progress made, we noted that more work needs to be done before CMS or states can use these data for program oversight:

All states need to report complete T-MSIS data. For our December 2017 report, we reviewed a sample of six states and found that none were reporting complete data.

T-MSIS data should be formatted in a manner that allows for state data to be compared nationally. In December 2017, we reported that state officials had expressed concerns that states did not convert their data to the T-MSIS format in the same ways, which could limit cross-state comparisons.

In our December 2017 report, we recommended that CMS take steps to expedite the use of T-MSIS data, including efforts to (1) obtain complete information from all states, (2) identify and share information across states to improve data comparability, and (3) implement mechanisms by which states can collaborate on an ongoing basis to improve the completeness, comparability, and utility of T-MSIS data. We also recommended that CMS articulate a specific plan and associated time frames for using T-MSIS data for oversight. The agency concurred with our recommendations but has not yet implemented them.

Our prior work has also noted areas where other data improvements are critical to program oversight:

In July 2014, we found that there was a need for data on supplemental payments that states make to individual hospitals and other providers. In particular, our findings and related recommendation from July 2014 indicate that CMS should develop a data collection strategy that ensures that states report accurate and complete data on all sources of funds used to finance the states' share of Medicaid payments.

In January 2017, we found limitations in the data CMS collects to monitor the provision of, and spending on, personal care services—services that are at high risk for improper payments, including fraud. In particular, data on the provision of personal care services were often not timely, complete, or consistent. Data on states' spending on these services were also not accurate or complete.
In January 2017, we recommended that CMS improve personal care services data by (1) establishing standard reporting guidance for key data, (2) ensuring linkage between data on the provision of services and reported expenditures, (3) ensuring state compliance with reporting requirements, and (4) developing plans to use data for oversight. The agency concurred with two of the recommendations, neither agreed nor disagreed with the other two, and has not yet implemented any of them.

More Complete Fraud Risk Assessment and Better Fraud Targeting

In December 2017, we examined CMS's efforts to manage fraud risks in Medicaid and compared them with our Fraud Risk Framework, which provides a comprehensive set of key components and leading practices that serve as a guide for agency managers to use when developing efforts to combat fraud in a strategic, risk-based way. This framework describes leading practices in four components: commit, assess, design and implement, and evaluate and adapt. (See fig. 6.) The Fraud Reduction and Data Analytics Act of 2015, enacted in June 2016, requires the Office of Management and Budget (OMB) to establish guidelines incorporating the leading practices from our Fraud Risk Framework for federal agencies to create controls to identify and assess fraud risks, and to design and implement antifraud control activities. In July 2016, OMB published guidance that, among other things, affirms that managers should adhere to the leading practices identified in our Fraud Risk Framework.

In our December 2017 report, we found that CMS's efforts partially aligned with our Fraud Risk Framework. In particular, CMS had shown a commitment to combating fraud—in part by establishing a dedicated entity, the Center for Program Integrity, to lead antifraud efforts and by offering and requiring antifraud training for stakeholder groups, such as providers, beneficiaries, and health insurance plans—and had taken steps to identify fraud risks, such as designating specific provider types as high risk and developing associated control activities. However, CMS had not conducted a fraud risk assessment for Medicaid and had not designed and implemented a risk-based antifraud strategy. A fraud risk assessment allows managers to fully consider fraud risks to their programs, analyze their likelihood and impact, and prioritize risks. Managers can then design and implement a strategy with specific control activities to mitigate these fraud risks, as well as design and implement an appropriate evaluation. We concluded that through these actions, CMS could better ensure that it is addressing the full portfolio of risks and strategically targeting the most significant fraud risks facing Medicaid. As a result, in December 2017 we made three recommendations to CMS, two of which were to conduct fraud risk assessments and to create an antifraud strategy for Medicaid, including an approach for evaluation. HHS concurred with our recommendations but has not yet implemented them.

Greater Federal-State Collaboration to Strengthen Program Oversight

The federal government and the states play important roles in reducing improper payments and overseeing the Medicaid program, including overseeing spending on Medicaid supplemental payments and demonstrations. Our prior work shows that oversight of the Medicaid program could be further improved by leveraging and coordinating program integrity efforts with state agencies, state auditors, and other partners.

Collaborative audits with state agencies.
As we have previously reported, CMS has made changes to its Medicaid program integrity efforts, including a shift to collaborative audits—in which CMS's contractors and states work in partnership to audit Medicaid providers. In March 2017, we reported that collaborative audits had identified substantial potential overpayments to providers, but barriers—such as staff burden or problems communicating with contractors—had limited their use, either preventing states from seeking audits or hindering the audits' success. We recommended that CMS address the barriers that limit state participation in collaborative audits, including their use in managed care delivery systems. CMS concurred with this recommendation and has taken steps to address these barriers for a number of states, but has not yet made such changes accessible to a majority of states.

State auditors and federal partners. We have found that state auditors and the HHS Office of Inspector General (HHS-OIG) offer additional oversight and information that can help identify program risks. To that end, we routinely coordinate our audit efforts with the state auditors and the HHS-OIG. For example, we have convened and facilitated meetings between CMS and state audit officials to discuss specific areas of concern in Medicaid and future opportunities for collaboration. The state auditors and CMS officials commented on the benefits of such coordination, with the state auditors noting that they can assist CMS's state program integrity reviews by identifying program risks. State auditors also have conducted program integrity reviews to identify improper payments and deficiencies in the processes used to identify them. We believe that these reviews could provide insights into program weaknesses that CMS could learn from and potentially address nationally. Coordination also provides an opportunity for state auditors to learn methods for conducting program integrity reviews. The following are recent examples of reviews conducted:

In 2017, the Oregon Secretary of State Audits Division found approximately 31,300 questionable payments to Coordinated Care Organizations (which receive capitated monthly payments for beneficiaries, similar to managed care organizations), based on a review of 15 months of data. In addition, the state auditor found that approximately 47,600 individuals enrolled in Oregon's Medicaid program were ineligible, equating to $88 million in avoidable expenditures.

Massachusetts' Medicaid Audit Unit's recent annual report (covering the period from March 15, 2017, through March 14, 2018) reported that the state auditor identified more than $211 million in unallowable, questionable, duplicative, unauthorized, or potentially fraudulent billing in the program.

A 2017 report released by the Louisiana Legislative Auditor's Office stated that the office reviewed Medicaid eligibility files and claims data covering January 2011 through October 2016 and found $1.4 million in questionable duplicate payments.

In fiscal year 2017, the Mississippi Division of Medicaid reported that it recovered more than $8.6 million through various audits of medical claims paid to health care providers. The division also referred seven cases, in which it had identified $3.1 million in improper billing, to the state's attorney general's office.
At a May 2018 federal and state auditor coordination meeting in which we participated, the HHS-OIG provided examples of the financial impact of its work related to improper payments, including one review of managed care long-term services and supports that identified $717 million in potential federal savings, three reviews of managed care payments made after beneficiaries' deaths that identified $18.2 million in federal funds to be recovered, and two reviews of managed care payments made for beneficiaries with multiple Medicaid IDs that identified $4.3 million in federal funds to be recovered.

Healthcare Fraud Prevention Partnership. The Healthcare Fraud Prevention Partnership (HFPP) is an important tool to help combat Medicaid fraud. In 2012, CMS created the HFPP to share information with public and private stakeholders and to conduct studies related to health care fraud, waste, and abuse. According to CMS, as of October 2017, the HFPP included 89 public and private partners—including Medicare- and Medicaid-related federal and state agencies, law enforcement agencies, private health insurance plans, and antifraud and other health care organizations. The HFPP has conducted studies that pool and analyze multiple payers' claims data to identify providers with patterns of suspect billing across private health insurance plans. In August 2017, we reported that partnership participants separately told us the HFPP's studies helped them identify and take action against potentially fraudulent providers and payment vulnerabilities of which they might not otherwise have been aware, and fostered both formal and informal information sharing.

Chairman Johnson, Ranking Member McCaskill, and Members of the Committee, this concludes my prepared statement. I would be pleased to respond to any questions you may have.

GAO Contacts and Staff Acknowledgments

If you or your staff members have any questions concerning this testimony, please contact Carolyn L. Yocom, who may be reached at 202-512-7114 or yocomc@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this statement. Other individuals who made key contributions to this testimony include Leslie V. Gordon (Assistant Director), Deirdre Gleeson Brown (Analyst-in-Charge), Muriel Brown, Helen Desaulniers, Melissa Duong, Julianne Flowers, Sandra George, Giselle C. Hicks, Drew Long, Perry Parsons, Russell Voth, and Jennifer Whitworth.

Related GAO Reports

Improper Payments: Actions and Guidance Could Help Address Issues and Inconsistencies in Estimation Processes. GAO-18-377. Washington, D.C.: May 31, 2018.

Medicaid: CMS Should Take Steps to Mitigate Program Risks in Managed Care. GAO-18-291. Washington, D.C.: May 7, 2018.

Medicaid: Opportunities for Improving Program Oversight. GAO-18-444T. Washington, D.C.: April 12, 2018.

Medicaid Demonstrations: Evaluations Yielded Limited Results, Underscoring Need for Changes to Federal Policies and Procedures. GAO-18-220. Washington, D.C.: January 19, 2018.

Medicaid: Further Action Needed to Expedite Use of National Data for Program Oversight. GAO-18-70. Washington, D.C.: December 8, 2017.

Medicare and Medicaid: CMS Needs to Fully Align Its Antifraud Efforts with the Fraud Risk Framework. GAO-18-88. Washington, D.C.: December 5, 2017.

Improper Payments: Additional Guidance Could Provide More Consistent Compliance Determinations and Reporting by Inspectors General. GAO-17-484. Washington, D.C.: May 31, 2017.
Medicaid Demonstrations: Federal Action Needed to Improve Oversight of Spending. GAO-17-312. Washington, D.C.: April 3, 2017.

Medicaid Program Integrity: CMS Should Build on Current Oversight Efforts by Further Enhancing Collaboration with States. GAO-17-277. Washington, D.C.: March 15, 2017.

High-Risk Series: Progress on Many High-Risk Areas, While Substantial Efforts Needed on Others. GAO-17-317. Washington, D.C.: February 15, 2017.

Medicaid: CMS Needs Better Data to Monitor the Provision of and Spending on Personal Care Services. GAO-17-169. Washington, D.C.: January 12, 2017.

Medicaid: Program Oversight Hampered by Data Challenges, Underscoring Need for Continued Improvement. GAO-17-173. Washington, D.C.: January 6, 2017.

Improper Payments: Strategy and Additional Actions Needed to Help Ensure Agencies Use the Do Not Pay Working System as Intended. GAO-17-15. Washington, D.C.: October 14, 2016.

Medicaid Program Integrity: Improved Guidance Needed to Better Support Efforts to Screen Managed Care Providers. GAO-16-402. Washington, D.C.: April 22, 2016.

Medicaid: Federal Guidance Needed to Address Concerns About Distribution of Supplemental Payments. GAO-16-108. Washington, D.C.: February 5, 2016.

Medicaid: Additional Efforts Needed to Ensure that State Spending is Appropriately Matched with Federal Funds. GAO-16-53. Washington, D.C.: October 16, 2015.

Medicaid: Service Utilization Patterns for Beneficiaries in Managed Care. GAO-15-481. Washington, D.C.: May 29, 2015.

Medicaid: Additional Actions Needed to Help Improve Provider and Beneficiary Fraud Controls. GAO-15-313. Washington, D.C.: May 14, 2015.

Medicaid: CMS Oversight of Provider Payments Is Hampered by Limited Data and Unclear Policy. GAO-15-322. Washington, D.C.: April 10, 2015.

Medicaid Demonstrations: HHS's Approval Process for Arkansas's Medicaid Expansion Waiver Raises Cost Concerns. GAO-14-689R. Washington, D.C.: August 8, 2014.

Medicaid Financing: States' Increased Reliance on Funds from Health Care Providers and Local Governments Warrants Improved CMS Data Collection. GAO-14-627. Washington, D.C.: July 29, 2014.

Medicaid Demonstration Waivers: Approval Process Raises Cost Concerns and Lacks Transparency. GAO-13-384. Washington, D.C.: June 25, 2013.

Medicaid: More Transparency of and Accountability for Supplemental Payments Are Needed. GAO-13-48. Washington, D.C.: November 26, 2012.

Medicaid Demonstration Waivers: Recent HHS Approvals Continue to Raise Cost and Oversight Concerns. GAO-08-87. Washington, D.C.: January 31, 2008.

Medicaid and SCHIP: Recent HHS Approvals of Demonstration Waiver Projects Raise Concerns. GAO-02-817. Washington, D.C.: July 12, 2002.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study

Medicaid, a joint federal-state health care program overseen by CMS, is a significant component of federal and state budgets, with total estimated expenditures of $596 billion in fiscal year 2017. Medicaid allows significant flexibility for states to design and implement program innovations based on their unique needs. The resulting diversity and size of the program make it particularly challenging to oversee at the federal level and also vulnerable to improper payments. In fiscal year 2017, estimated improper payments in Medicaid were $36.7 billion, up from $29.1 billion in fiscal year 2015. Further, the Medicaid program accounted for about 26 percent of the fiscal year 2017 government-wide improper payment estimate.

This testimony focuses on (1) major risks to the integrity of the Medicaid program and (2) actions needed to manage these risks. It draws on GAO's reports on the Medicaid program issued between November 2012 and May 2018.

What GAO Found

GAO's work has identified three broad areas of risk in Medicaid that also contribute to overall growth in program spending, projected to exceed $900 billion in fiscal year 2025.

1) Improper payments, including payments made for services not actually provided. Regarding managed care payments, which were nearly half (or $280 billion) of Medicaid spending in fiscal year 2017, GAO has found that the full extent of program risk due to overpayments and unallowable costs is unknown.

2) Supplemental payments, which are payments made to providers—such as local government hospitals—that are in addition to regular, claims-based payments made to providers for specific services. These payments totaled more than $48 billion in fiscal year 2016 and in some cases have shifted expenditures from the states to the federal government.

3) Demonstrations, which allow states to test new approaches to coverage. Comprising about one-third of total Medicaid expenditures in fiscal year 2015, demonstrations have, GAO found, increased federal costs without providing results that can be used to inform policy decisions.

GAO has recommended numerous actions to strengthen oversight and manage program risks.

Improve data. The Centers for Medicare & Medicaid Services (CMS), which oversees Medicaid, needs to make sustained efforts to ensure Medicaid data are timely, complete, comparable across all states, and useful for program oversight. Data are also needed for oversight of supplemental payments and for ensuring that demonstrations are meeting their stated goals.

Target fraud. CMS needs to conduct a fraud risk assessment for Medicaid, and design and implement a risk-based antifraud strategy for the program.

Collaborate. There is a need for a collaborative approach to Medicaid oversight. State auditors have conducted evaluations that identified significant improper payments and outlined deficiencies in Medicaid processes that require resolution.

What GAO Recommends

As part of this body of work, GAO has made 83 recommendations to address shortcomings in Medicaid oversight and suggested four matters for congressional consideration. The Department of Health and Human Services and CMS have generally agreed with these recommendations and have implemented 25 of them. GAO will continue to monitor implementation of the remaining recommendations.
Background

Personal property refers to a wide variety of property that may include commonly used items such as computers, office equipment and furniture, and vehicles, as well as more specialized property specific to agencies, such as medical equipment for VA and medical helicopters for the Army. See figure 1.

The personal property exchange/sale authority allows agencies to replenish property that is not excess or surplus and that is still needed to meet the agency's continuing mission. Agencies must meet several requirements, including the following:

The property exchanged or sold is similar to the property acquired. Agencies can meet the similarity requirement in one of several ways. First, the property acquired is identical to the property replaced. Second, the acquired property and the replaced property fall within a single federal supply group of property. Third, both the acquired and the replaced property constitute parts or containers for similar parts. Fourth, the acquired and the replaced property are designed or constructed for the same purpose. For instance, ambulances and station wagons adapted for use as ambulances would be considered similar.

The property exchanged or sold was not acquired for the principal purpose of later exchanging or selling it using the authority. For example, an agency cannot purchase a costly piece of equipment for the sole reason that it will deliver a higher value when sold using the authority.

Proceeds from the sale can only be put toward the purchase of replacement property and cannot be spent on services. In other words, an agency can use proceeds from the sale of a vehicle to purchase a new vehicle, but it cannot use proceeds to hire a mechanic to repair an existing vehicle.

In addition, GSA regulations, except as otherwise authorized by law, require that proceeds from a sale be used for replacement property during the fiscal year in which the property was sold or the following fiscal year. For an item sold in fiscal year 2018, an agency has the rest of fiscal year 2018 as well as fiscal year 2019 to purchase a replacement item. If an agency does not spend these funds by the end of fiscal year 2019, the monies are to be deposited in the U.S. Treasury. Finally, agencies are prohibited from using the authority to replace certain types of property (i.e., hand tools and clothing). However, agencies may request a waiver from GSA to sell these prohibited items or to extend the time frame to purchase replacement property.

Agencies may choose between two transaction methods to replace property—the exchange (trade-in) method or the sale method—but must determine which method provides the greater return to the government, accounting for administrative and overhead expenses. A typical exchange occurs when the original manufacturer delivers a replacement item to the agency and removes the item being replaced, applying a trade-in credit (an allowance) toward the purchase of the replacement item. If the sale method is used, the agency receives the proceeds from the sale of the non-excess items (needed to meet mission requirements) and applies those proceeds to the purchase of the replacement personal property. See figure 2.

In conducting a sale, agencies are to follow a process similar to the disposal process for excess property. When an agency disposes of excess property, it makes the item available to other federal agencies and state agencies by posting it in GSAXcess—GSA's website for reporting, searching, and selecting excess property.
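The greater-return determination described above can be illustrated with a minimal sketch that compares the exchange allowance with net sale proceeds after administrative and overhead expenses. All dollar amounts below are hypothetical, and the sketch assumes those expenses are the only deductions.

```python
def net_return(gross: float, admin_and_overhead: float) -> float:
    """Net return to the government from one transaction method."""
    return gross - admin_and_overhead

exchange_allowance = net_return(9_000, 200)   # trade-in credit, low overhead
sale_proceeds = net_return(11_500, 1_800)     # public sale, higher overhead

better = "sale" if sale_proceeds > exchange_allowance else "exchange"
print(better, exchange_allowance, sale_proceeds)  # sale 8800 9700
```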
The disposal process generally consists of four sequential stages in which personal property may be transferred to another agency or eligible recipient, donated, sold, or abandoned or destroyed. Similarly, agencies may use GSAXcess to facilitate the replacement of property under the exchange/sale authority. However, unlike excess property, which may be offered at no cost, property offered under the exchange/sale authority must be paid for: a federal agency is to pay no more than the fair market value for the item, and a state agency is to pay a negotiated fixed price. Otherwise, the property may be listed for sale to the general public at approved sales centers, such as GSA AuctionsSM, or through other approved methods, such as live auctions or Internet sales. After the sale closes, the agency receives the proceeds to apply toward the purchase of a similar item.

Agencies are required to submit a summary report to GSA at the end of each fiscal year on the type, the quantity, the exchange allowances or sale proceeds, and the original acquisition cost of items for both exchange and sale transactions. Agencies that made no transactions during a fiscal year must submit a report stating so. Ultimately, agencies decide whether to use the exchange/sale authority to replace property in their inventory. In managing property, federal law requires agencies to maintain adequate inventory controls and accountability systems and to assess the extent to which the agency's mission depends on the property.

A Few Agencies Carried Out Most Transactions, Which Involved Selling Billions of Dollars in Property

GSA Reported About 60 Percent of Proceeds across the Federal Government

According to GSA's annual summary data, 27 agencies reported using the exchange/sale authority and received a total of about $3.1 billion in exchange allowances or sale proceeds from fiscal year 2013 through fiscal year 2017. While many agencies used the authority, a few agencies, particularly GSA, together accounted for about 90 percent of all allowances and proceeds. Specifically, 5 of the 27 agencies reported nearly all exchange allowances and sale proceeds. GSA accounted for about $1.9 billion of the $3.1 billion (or about 60 percent) of reported allowances and proceeds across the federal government. Four other agencies—the Departments of Homeland Security, Agriculture, Defense, and the Interior—accounted for about $934 million (or about 30 percent) of the total. The other 22 agencies using the authority reported about $332 million (or about 11 percent) in exchange allowances or sale proceeds over the 5-year period. See figure 3.

Finally, agencies reported using the sale method more than the exchange method. Sales by agencies accounted for about $2.9 billion (or about 91 percent), while the exchange method accounted for about $275 million (or about 9 percent) of total transactions reported, primarily because GSA and DOD reported more use of the sale method than the exchange method.

While some agencies reported hundreds of millions of dollars in exchange allowances and sale proceeds, the data show that 10 federal agencies—including the Department of Labor, the Office of Personnel Management, and the Social Security Administration—reported relatively few transactions, which totaled less than $100,000 in exchange allowances and sale proceeds.
GSA OGP officials consider the agency-reported data to provide a representative picture of the overall exchange/sale transactions occurring across the federal government. GSA OGP relies on the agencies to ensure the accuracy and completeness of the exchange/sale information. According to GSA OGP officials, because GSA does not have authority to compel the agencies to report or to address quality issues, it does not examine record-level data from the agencies to determine the data's accuracy and has no way of verifying whether exchange data are accurate and complete. Nonetheless, GSA officials said they take steps to ensure the data are reliable and complete. For example, GSA OGP officials said they review the data for any obvious inaccuracies and follow up with the reporting agency to correct them. In addition, according to these officials, GSA reports the sales portion of most agencies' exchange/sale transactions for any sales conducted by GSA and asks agencies to verify the data before finalizing them in the summary report.

Agencies Reported Selling High-Value Items, Primarily Vehicles

While agencies exchanged and sold a wide variety of items, GSA's annual summary data show that high-value items, primarily vehicles, accounted for the vast majority of allowances and proceeds from fiscal year 2013 through fiscal year 2017. Specifically, vehicle sales across the federal government accounted for about $2.6 billion of the $3.1 billion (or about 84 percent) in total proceeds; GSA's Fleet program alone accounted for about 71 percent of that total. According to GSA Fleet program officials, the authority allows GSA to continuously update its fleet of over 214,000 vehicles while keeping lease payments low for its 75 customer agencies. The program sells an average of about 36,000 vehicles each year, bringing in about $370 million in sales proceeds annually. In fiscal year 2017, the program received almost $300 million in proceeds from vehicle sales and spent about $776 million acquiring new vehicles. Three other agencies—the Departments of Agriculture, Homeland Security, and the Interior—each reported over $100 million in proceeds from vehicle sales.

In addition to vehicles, agencies reported exchanging and selling other types of high-value items. For example, DOD reported using the authority to sell or exchange helicopters. According to the Army Aviation Program Executive Office, the Army continues to divest and plans to replace up to 800 Black Hawk helicopters from 2014 to 2025, each having an average value of about $1.5 million. See figure 4. DOD reported about $150 million in exchange allowances and sale proceeds from using the authority to replenish aircraft, and as of January 2018, Army Aviation had purchased five Black Hawk helicopters. Other DOD agencies—the Naval Air Systems Command and the Air Force Life Cycle Management Center—are using the authority to exchange aircraft engines and parts containing rare earth metals under a reclamation and propulsion material exchange program.

In addition to high-value items, agencies reported selling a wide variety of other items, including missiles, office equipment, lumber, and packing supplies. One of our selected agencies, VA, predominantly used the authority to exchange medical equipment. See figure 4. However, we did not find VA's data sufficiently reliable to report separately.
Based on our interviews with VA medical centers, we found that the reported data did not reflect actual exchange/sale transactions, as we discuss later in this report. However, we have included VA data in the reported $332 million for "Other federal agencies" in figure 3.

Selected Agencies Expressed Confusion About How to Use the Authority or Monitor Exchange/Sale Activities

VA Did Not Understand How to Use the Exchange/Sale Authority

GSA regulations for the personal property exchange/sale authority set forth several conditions for using the authority, including that the property exchanged or sold is not excess or surplus and that agencies report information on their exchange/sale transactions to GSA on an annual basis. Federal internal control standards state that management should externally communicate the necessary quality information to achieve the entity's objectives. However, the agencies in our review had different levels of understanding of the authority, which affected how they used it and the outcomes they achieved. For example, VA officials said they misunderstood key aspects of the exchange/sale authority, resulting in inefficiencies and data inaccuracies, as described below:

Process for selling property: Officials from all three selected VA medical centers said they did not understand the sequence of events in selling property using the sale method, a situation that led to VA's using a potentially less economical method to acquire new equipment. For example, officials at two selected VA medical centers told us that they believed they had to sell their medical equipment prior to acquiring replacement equipment. Officials at one medical center said this sequence of events makes it difficult to use the sale method of the exchange/sale authority because VA medical centers must have medical equipment, such as x-ray machines, readily available and fully operational for veterans at all times. However, GSA OGP officials stated that replacement property can be purchased prior to the sale of the property it replaces. In addition, officials at a VA medical center reported they had limited storage, making it difficult to buy replacement equipment and store it until VA could sell the equipment being replaced. As a result, a VA medical center official stated that they instead used the exchange method because it provided a seamless replacement of equipment and prevented any break in the availability of medical equipment. While the exchange method is a viable approach, in this case the sale method could have delivered a higher monetary return. In addition, by using the sale method, VA could potentially have replaced equipment more economically than paying the full cost of the item from the agency's appropriation. A VA headquarters official was also unclear about how sale proceeds may be used—specifically, whether the proceeds from any type of medical equipment in a particular supply category, such as a scalpel, could be put toward the replacement of another item in that same classification, such as a wheelchair, or whether the items had to be identical or serve a similar purpose.

Data reporting: Officials at two selected VA medical centers did not clearly understand the annual summary data reporting process. These officials stated that they found GSA's reporting template confusing because it provides minimal direction to the user and does not clearly define some data-reporting elements.
The template includes a space for reporting “exchanged/sold”; however, officials at one medical center were unaware that “sold” referred specifically to exchange/sale transactions and not to other transactions referred to as sales, such as surplus property sales. According to medical center officials, this medical center reported about 1,000 misclassified sales in GSA's annual summary data.

Exchange/sale versus disposal: According to VA officials, they or others involved in personal property management did not fully understand the distinction between the process for acquiring replacement property under the exchange/sale authority and the process for declaring property as excess. Officials within all three selected VA medical centers misunderstood the difference between the two processes, possibly because both processes use GSAXcess to sell property under the exchange/sale authority or to report property as excess for disposal. As a result, one VA medical center mistakenly reported excess disposals as exchange/sale transactions in the GSA OGP annual summary data. In addition, two facilities disposed of some still-needed property instead of conducting sales under the exchange/sale authority. A VA headquarters official acknowledged that property managers in charge of implementing the exchange/sale authority at medical centers may be confusing these two processes or may be unaware that the exchange/sale authority exists. Similarly, officials from the Air Force and Navy said they or others involved in personal property management did not always understand the difference between these two processes. An Air Force official stated that DOD's policies do not clearly distinguish the exchange/sale process from the disposal process and do not consistently define terms, such as “excess” and “non-excess” property, in ways that align with GSA's regulations. In retrospect, Air Force officials stated that they disposed of property that could have been replaced through the exchange/sale authority. Generally, disposal results in (a) sales proceeds being returned to the U.S. Treasury rather than retained by the agency and (b) services possibly having to use their appropriations for replacement property, rather than working directly with the vendor to obtain a replacement at a reduced cost. We have previously reported on DOD's disposing of $855 million in excess items for which it will likely have a continuing need.

Conversely, based on our interviews and review of their policies, records, and transaction data, two program offices within the Army and GSA—Army Aviation and GSA Fleet—appeared to understand how to use the exchange/sale process. We found that these offices may have a greater level of understanding for a few key reasons:

Narrow scope: Both programs are designed around replacing one type of item—helicopters for helicopters or vehicles for vehicles. When items are not so directly interchangeable, determining whether the item sold or exchanged and its replacement are “similar” can be challenging. Because the Army Aviation and GSA Fleet programs focus on one type of item, the determination of what constitutes similar property under the GSA regulation is not a challenge.

Established programs with frequent sales: The Army Aviation and GSA Fleet programs have sold hundreds of aircraft and tens of thousands of vehicles over the past several years. They have invested resources into developing an exchange/sale process.
Conversely, programs that may sell or exchange only an item or two a year—even very expensive items, such as medical equipment—may not have the same opportunities to develop processes and guidance through repeated sales or exchanges.

High-value items: Similarly, both the Army Aviation and GSA Fleet programs sell high-value items. Thus, investing resources in an exchange/sale process makes sense, as the programs benefit from the sales and have a process to guide and track these high-value items. For an agency like VA, which disposes of some low-value items, there may not be the same motivation to develop a standard process. GSA OGP officials emphasized that high-value items, such as helicopters and vehicles, are best suited for using the exchange/sale authority.

GSA Has Not Clarified Aspects of Using Exchange/Sale Authority for Agencies

GSA OGP officials stated that they recognize that some agencies, such as VA, may experience confusion using the authority, that the regulations are misunderstood by agencies, and that aspects of the authority need to be clarified. According to these officials, GSA attempted to amend the regulations in 2015 to address key areas of confusion, including:

restricting the definition of “similar” to ensure that items replaced are clearly similar. GSA wanted to change the federal supply category criteria to make agencies replace items at the more specific four-digit level rather than the broader two-digit level. As an illustration, this change would help clarify the confusion VA reported about whether a scalpel and a wheelchair qualify as similar items.

clarifying the process for selling property; specifically, clarifying that agencies can purchase replacement property prior to the sale of property that no longer adequately performs its task.

However, GSA OGP officials stated that they did not complete the rulemaking process, in order to give the incoming administration an opportunity to review and approve any revisions. Since the change in administration, GSA officials said they have been focused on evaluating the continued need and relevance of all of their regulations as part of the administration's plan to conduct regulatory reviews. Nonetheless, GSA OGP officials said they plan to address these areas of confusion by amending the regulations. Specifically, they plan to clarify the definition of similar property and the difference between excess and non-excess property, among other changes. However, officials estimate the rulemaking will likely not be finalized for at least 2 years, and the extent to which the rulemaking process will result in clarifying language is unknown. Although GSA anticipates initiating a rulemaking to amend regulations, which could make the definition of “similar” more restrictive, GSA OGP officials told us that clarifying the issues agencies found confusing would not necessarily require a rulemaking. They highlighted other actions they are taking to promote the use of the authority, inform agencies of the requirements, and train agencies on using the authority. For example, they conduct outreach by making presentations at national conferences (e.g., FedFleet), meet with representatives from the National Property Management Association, and hold small group discussions with program managers specializing in certain high-value items, such as aircraft. GSA's presentations aim to educate agencies on what the authority is, the conditions and requirements of the authority, and when to use the authority.
According to GSA OGP officials, as a result of their outreach, they have seen immense growth in exchange/sale transactions among the aviation community. GSA has also issued bulletins to help dispel misunderstandings related to using the exchange/sale authority. For example, GSA issued a bulletin in 2010 to federal agencies to remind them to submit annual reports on exchange/sale transactions. This bulletin contained information on the reporting requirements, frequently asked questions, and points of contact for agencies to reach out to with additional questions. In summer 2018, GSA OGP officials drafted a new bulletin to further address financial aspects of the exchange/sale authority and expect to issue it in December 2018. This bulletin details why agencies should use the authority and directs agencies to develop policies for using the authority and to consult with the Chief Financial Officer of the agency to obtain more information. According to these officials, an additional bulletin would take 3 to 4 months to develop and issue. However, neither GSA's outreach nor its draft bulletin addresses existing confusion regarding the sales process or data reporting, or distinguishes the exchange/sale process from the disposal process. For example, GSA's outreach, such as the FedFleet presentation, generally describes the authority and discusses provisions for using the authority but does not address issues agency officials told us they found confusing. The presentation tells agencies that they can sell property under the authority but does not go into the mechanics of how to sell property. By using presentations like these to address the areas agencies found confusing, GSA would have an opportunity to clarify these issues and encourage agencies to use the authority more. Moreover, GSA OGP officials told us that they believe that a lack of knowledge of the authority is a reason why some agencies do not use it more. As we reported earlier, 10 of the 27 federal agencies that reported transactions had few exchange/sale transactions over the past 5 years. According to a VA official, if VA medical centers better understood how to use the authority, use could increase significantly throughout the agency. Furthermore, if GSA provided clearer information on using the authority, the 10 agencies we found used the authority infrequently might increase their use. Additionally, GSA's draft bulletin on financial issues does not address the logistical issues agencies found confusing, such as how to sell property using the exchange/sale authority. The bulletin addresses accounting procedures agencies should follow when conducting transactions but does not describe how agencies are to conduct these transactions. Until GSA takes action to address this confusion, agencies may continue to misunderstand and forgo the exchange/sale authority. If agencies continue to misunderstand aspects of the exchange/sale authority, they may not take full advantage of it, thereby missing opportunities to be more effective stewards of government property and to replenish property more efficiently.

GSA and VA Did Not Monitor Exchange/Sale Activities

Agencies are responsible for managing their own personal property, including monitoring their exchange/sale activities. Federal internal control standards call on managers to establish and operate monitoring activities to monitor the internal control system and evaluate the results.
Monitoring involves regular management and supervisory activities, comparisons, reconciliations, and other routine actions. We found that the Army monitored its exchange/sale activities, as outlined in its policies. The Army's policy delegated responsibility to the Army's Deputy Chief of Staff (Logistics) to monitor and approve Army programs seeking to use the authority. Our review of the Army's policy found that multiple Army offices monitored the financial, logistical, legal, and procurement functional areas as they reviewed and communicated on the eligibility of exchange/sale transactions. The policy also allows program and inventory managers to use the authority for high-value items, requires contracting officers and attorneys to review the transactions, and uses a management checklist for transactions. Consistent with policy, the Army's Deputy Chief of Staff, in conjunction with offices within DOD, reviewed and approved requests from Army Aviation to use the exchange/sale authority to sell Black Hawk helicopters and apply proceeds to replacement helicopters. An Army official said that the office continues to monitor exchange/sale transactions in collaboration with the Army Aviation program to manage the exchange and sale of its personal property, including Black Hawk helicopters.

Unlike the Army, GSA OAS did not monitor its internal exchange/sale activities. In 2009, GSA's internal policy established a position responsible for ensuring compliance with government-wide personal property requirements. However, GSA officials stated that the position was never staffed and was later subsumed into GSA OAS when that office was established in 2011 to manage personal property, including exchange/sale activities, within the agency. Since that time, GSA OAS officials said that they have not monitored these activities because senior management did not prioritize personal property, including exchange/sale transactions. For example, management did not clarify GSA OAS's responsibilities, nor did it define the scope of the office's authority for monitoring exchange/sale activities. As a result, GSA OAS officials said they have not been involved with any exchange/sale activities within the agency, and besides GSA Fleet, they do not know the extent to which other internal offices are using (or should be using) the authority. According to GSA OAS officials, they have recently focused on an effort to rebuild an internal personal-property management program that will take several years to develop given the current staff of two. As part of this effort, GSA OAS revised the policy for internal personal property management in 2018 and is drafting a standard operating procedure that is expected to provide additional clarification for monitoring and conducting exchange/sale activities within GSA. According to GSA officials:

The 2018 policy provides relevant updates and more details that distinguish between (a) the exchange/sale authority for the exchange and sale of non-excess, non-surplus personal property and (b) the disposal authority, with a focus on the disposal of excess personal property.

The draft standard operating procedure is to provide procedures for all internal GSA offices to follow when using the authority. This standard operating procedure establishes a position to, among other things, help internal offices conduct and report exchange/sale transactions. GSA OAS officials referred to this procedure as a work in progress and were uncertain when it would be finalized.
However, GSA OAS officials said that they do not know whether this policy revision will allow them to monitor exchange/sale activities, for two reasons. First, GSA OAS is unclear about the scope of its authority, such as whether the GSA Fleet program falls under its exchange/sale purview. GSA Fleet program officials said that they are not opposed to having GSA OAS monitor their program in the future. Second, this procedure will not be formally approved or coordinated throughout GSA, meaning there may not be consensus among all GSA offices as to GSA OAS's responsibilities and scope of authority. As a result, the revision of the policy and completion of the procedure may not be enough to ensure compliance with exchange/sale requirements. In the absence of clear responsibilities and a defined scope of authority, GSA OAS may not be able to monitor exchange/sale activities or provide clear information and direction to other offices within GSA.

Similar to GSA, VA conducted limited monitoring of its exchange/sale activities. VA policy states that the Deputy Assistant Secretary for Acquisition and Logistics has department-wide responsibility for personal property inventory management, utilization, and disposition, as well as for monitoring VA logistics programs and policies. Within VA's Veterans Health Administration (VHA), the Office of Procurement and Logistics assigns logistics officers at VHA Regional Offices responsibility for monitoring medical centers to ensure compliance with VA and VHA policies. However, we found that the three VHA Regional Offices conducted limited monitoring of the 23 medical centers under their purview. According to the officials we contacted, they conducted a cursory review of end-of-year data from medical centers before the data were submitted through VHA to GSA for the annual summary report. According to officials at one Regional Office, they did not focus on monitoring exchange/sale transactions beyond a cursory review to see that property fell within the medical or laboratory equipment supply categories. As previously mentioned, we found that reported data did not reflect actual exchange/sale transactions. Specifically, we found that none of the sale transactions reported in 2016 as exchange/sale transactions by a selected medical center in this region was correct. Instead, these transactions were sales of surplus property. According to officials at another Regional Office, they had no reason to review exchange/sale transactions in a more robust manner because end-of-year reporting presented no problems in the past that would warrant a more standardized approach. However, for the one selected medical center in this region, we found several errors in reported end-of-year data from 2013 through 2017. Specifically, we found that nearly all reported exchanges were actually sales of surplus property; a reported exchange in 2017 was actually a transfer to another medical center; and, despite the center's reporting no transactions in 2016, we identified an exchange valued at $500,000. According to officials from a third Regional Office, they monitored various aspects of VA's personal property program—inventories and disposals—but not exchange/sale transactions. During our review, we found that one selected medical center under their purview reported about 1,000 sale transactions to GSA, but none was correct. Instead of sales of needed (non-excess, non-surplus) property, they were actually sales of surplus property.
Regional officials are aware of this error and have added four new questions about exchange/sale transactions to the checklist used for their annual quality-control reviews. They said they do not know whether other Regional Offices perform similar reviews. An official in VA's Office of Acquisition and Logistics acknowledged that these findings are likely not uncommon because the office has not developed or communicated the management activities necessary for Regional Offices to consistently monitor medical centers' exchange/sale transactions. The lack of communication on monitoring procedures was corroborated by two Regional Offices. An official with the Office of Acquisition and Logistics explained that the office promulgates policy and that VHA's Office of Procurement and Logistics helps ensure policy is followed, but that the absence of monitoring stems, in part, from these two offices' not collaborating or communicating the activities Regional Offices are to conduct. VHA Regional Offices monitor medical centers through annual quality-control reviews, but the reviews do not include an exchange/sale component. Furthermore, VA internally reviews a small sample of VHA's annual quality-control reviews each year. From a Regional Office perspective, officials told us they prioritized other activities, such as monitoring inventories or disposal of equipment, over exchange/sale activities. The Office of Acquisition and Logistics has also not communicated with VA medical centers on how to effectively use the authority to support their medical equipment replacement needs or on the benefits associated with the authority. For example, the office has not provided specific guidance on how to conduct and monitor exchange/sale transactions beyond issuing personal property policies. VA officials are taking steps to improve communication with those involved in exchange/sale transactions throughout the agency—both those monitoring transactions and those initiating them. For example, officials within the Office of Acquisition and Logistics stated that they plan to clarify the use of the exchange/sale authority within the agency's policies for personal property disposal. This clarification will come in the form of a notice (an incremental policy change) or as part of a planned rewrite of personal property policies. However, it is uncertain whether the information will have a level of detail useful for medical centers to understand the requirements for using the exchange/sale authority or will delineate how the exchange/sale process differs from the disposal process. Adding to this is the uncertainty about the time frame for finalizing and communicating such information to medical centers. Furthermore, VA officials said the policy changes alone will not be sufficient and that assistance from VHA will be necessary to ensure Regional Offices understand their monitoring roles and responsibilities. A VHA official acknowledged the need to work with Regional Offices to augment the annual quality-control review checklists with an exchange/sale component, but it is unclear if and when such an update will take place. Until VA and VHA work together to develop a detailed policy for monitoring and establish time frames with milestones for communicating information, they cannot be assured that the 172 medical centers and 18 Regional Offices understand the exchange/sale authority, how to use it, and how to monitor end-of-year reporting data.
Conclusions

By using the exchange/sale authority, agencies have an opportunity to be good stewards of government property by efficiently replacing needed property, including high-value items, that serves critical and continuing requirements to meet agency missions. However, unfamiliarity with the exchange/sale authority and confusion surrounding it can lead to decisions that are not in the government's best interest. Although GSA OGP officials acknowledge the need to amend the regulations to address areas that require rulemaking, delay in addressing areas of confusion that currently exist but do not require rulemaking will continue to lead to misinterpretation or misunderstanding of the authority. Moreover, until GSA specifies GSA OAS's responsibilities and defines the scope of its authority, it will continue a long-term pattern of not monitoring GSA's exchange/sale activities. Finally, until VA develops and communicates the necessary information to help Regional Offices and medical centers with their exchange/sale responsibilities, it will not have assurance that all VA medical centers are reporting transactions accurately or effectively using the exchange/sale authority.

Recommendations for Executive Action

We are making the following two recommendations to GSA and one recommendation to VA.

GSA's Associate Administrator for the Office of Government-wide Policy should take action to address specific areas of federal agency confusion with the exchange/sale authority, such as the process for selling property, reporting data, and distinguishing the exchange/sale process from the disposal process. Such actions could include issuing bulletins or conducting expanded outreach and, as necessary, issuing regulations. (Recommendation 1)

The Administrator of General Services should take steps to improve agency-wide monitoring of exchange/sale activities within GSA by specifying the Office of Administrative Services' responsibilities and by defining the scope of its authority. (Recommendation 2)

VA's Deputy Assistant Secretary for Acquisition and Logistics, in collaboration with VHA's Office of Procurement and Logistics, should revise VA's policy to include details on the exchange/sale authority, particularly those related to monitoring by Regional Offices and use of the authority by medical centers, and establish time frames with milestones for communicating such information. (Recommendation 3)

Agency Comments

We provided a draft of this report to GSA, DOD, and VA for comment. All three agencies agreed with the findings. GSA and VA also agreed with the recommendations for their agencies. DOD provided a technical comment on the report in an email; we incorporated the suggestion. GSA agreed with our recommendations and stated that it has already begun to increase understanding and appropriate use of the exchange/sale authority within GSA and across the federal government. GSA is finalizing a plan to address the recommendations. GSA's written response is reprinted in appendix II. VA agreed with our recommendation to revise its policy to include details on the exchange/sale authority. VA stated that the Office of Acquisition and Logistics, in conjunction with the VHA Procurement and Logistics Office, has produced two draft memorandums to amend policy related to the exchange/sale authority as well as the utilization and disposal of personal property. The agency plans to promulgate the new policy by December 2018.
VA’s written response is reprinted in appendix III. We are sending copies of this report to the appropriate congressional committees, the Administrator of General Services, the Secretary of Defense, and the Secretary of Veterans Affairs. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-2834 or rectanusl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. Appendix I: Objectives, Scope, and Methodology Our objectives were to (1) describe what is known about personal property exchange/sale transactions, as reported by federal agencies from fiscal years 2013 through 2017, and (2) examine selected agencies’ experiences using the personal property exchange/sale authority and monitoring such activities. To address both objectives, we reviewed applicable federal statutes and regulations pertaining to personal property management and the exchange/sale authority, our prior work, and reports by federal agencies’ offices of inspectors general on personal property management issues. To understand General Services Administration’s (GSA) role and responsibilities for personal property management in support of exchange/sale activities across the federal government, we reviewed GSA’s personal property management structure, policies, bulletins, briefings, and training materials. To describe what is known about the personal property exchange/sale transactions, we analyzed annual exchange/sale summary data, as reported to GSA’s Office of Government-wide Policy (GSA OGP) from federal agencies from fiscal year 2013 through fiscal year 2017. These data identify the agency involved in the transactions, the transaction method, and the type and value of the property. These data are the only federal government-wide data available on exchange/sale transactions. Accordingly, we analyzed these summary data to characterize transactions over a 5-year time frame, by agency, by type of transaction (exchange or sale), by type of personal property using personal property categories, and by amount of exchange allowances and sale proceeds. We assessed the reliability of these data from a government-wide perspective and for selected agencies. From a government-wide perspective, we reviewed GSA’s electronic template provided to federal agencies for reporting data, viewed a training video used to help agencies report data to GSA, and reviewed the users’ guide and other materials related to GSA’s personal property reporting tool. In addition, we interviewed GSA OGP officials regarding their data processes—such as data collection, submission, reconciliation, verification, and compilation of annual exchange/sale summary reports—to understand the steps GSA OGP takes to determine the accuracy, consistency, and completeness of data. We did not independently verify all the exchange and sales data that was provided to us because of the large quantity of detailed data associated with each agency and because some of the data were not within the scope of our selected agencies and personal property categories. 
However, we determined that GSA’s government-wide summary data was sufficiently reliable for our purposes of describing the agencies that use the authority, the general types of property they acquire, and the relative order of magnitude of exchange allowances and sales proceeds. For sales conducted through GSA sales centers, GSA reports summary information on behalf of most agencies. GSA officials told us all exchange transactions are self-reported by agencies. GSA does not ensure the accuracy of this information beyond a review for obvious errors. However, because sales account for about 91 percent of the dollar value of all transactions, we believe that the total value of transactions across the federal government is sufficiently reliable for our purposes of describing exchange/sale activity. To assess the reliability of GSA and other selected agencies’ summary data, we compared annual exchange/sale summary data collected by GSA OGP with detailed GSAAuctionsSM sales data associated with the exchange/sale authority collected by GSA’s Office of Personal Property Management. We looked to see if aggregated sales totals matched, identified similarities and gaps, and observed individual agency and government-wide trends for using the exchange/sale authority. We found data reported by GSA’s Office of Fleet Management (GSA Fleet) and the Army’s Program Executive Office for Aviation (Army Aviation) to be reliable. However, we found reliability issues with data reported by the Department of Veterans Affairs (VA). As a result of our interviews with selected facilities, we found that some reported sale and exchange data from VA did not represent actual exchange/sale transactions. Accordingly, we determined that VA data were not reliable to analyze independently. We did include these data in the total for the federal government given that they accounted for about 1 percent of that total. To examine selected agencies’ experiences using the exchange/sale authority and monitoring such activities, we selected three agencies— GSA, the Department of the Army within the Department of Defense (DOD), and the VA—based on various characteristics, such as the values of the agencies’ exchange allowances and sale proceeds; the quantity of items exchanged and sold; and selected three different types of personal property categories—vehicles, aircraft, and medical equipment—for which the exchange/sale authority was used over the 5-year time period. GSA: We selected GSA because it reported a high-value of exchange/sale transactions. Within GSA, two offices have key roles in the internal use of the exchange/sale authority. First, through GSA Fleet, GSA manages the government-wide motor-pool program (the largest user of the exchange/sale authority) that acquires vehicles and then leases them to other federal agencies. Second, GSA’s Office of Administrative Services (GSA OAS) is the office responsible for performing personal-property management functions, such as developing policy and procedures, internal to the agency. Army: We selected the Army because it reported a relatively low- number of high-value items. In particular, Army Aviation accounted for the majority of high-value aviation-related exchange/sale transactions within DOD. During the course of our review, we also attended a joint GSA-DOD presentation focused on major end items that brought together GSA, Army, Navy, and Air Force officials to discuss their experiences using the exchange/sale authority. 
VA: We selected VA because it reported a high number of low-value items sold or exchanged. For in-depth interviews, we selected three medical centers (Long Beach, California; Cincinnati, Ohio; and Portland, Oregon) that reported using the authority for the acquisition of medical equipment and the three Veterans Integrated Service Networks (Regional Offices) responsible for monitoring these medical centers. See table 1 below.

At all of these agencies, we reviewed exchange/sale transactions to understand agencies' experiences in using the authority; personal property policies and programs; financial documents applicable to exchange/sale transactions; and applicable Standards for Internal Control in the Federal Government and GSA regulations. We also reviewed relevant sections of Principles of Federal Appropriations Law to understand decisions on using the exchange/sale authority for acquiring personal property. In addition, we examined agencies' monitoring of exchange/sale transactions in the context of internal control standards. We interviewed officials from each of our selected agencies responsible for using the exchange/sale authority and implementing processes to manage and monitor personal property. We interviewed GSA Fleet officials and visited Army Aviation officials in Huntsville, Alabama. During these interviews, GSA and Army Aviation officials walked through materials and explained their exchange/sale processes using actual sample transactional information. We examined documentation associated with personal property that had either been exchanged or sold. For VA, we selected 3 of 172 VA medical centers to understand how these medical centers implemented their personal property exchange/sale processes and procedures. We selected one site based on its high number of exchange/sale transactions of medical equipment and its close geographic proximity to one of our field offices. The other two sites were chosen based on a high and a low number of exchange/sale transactions of medical equipment, respectively. At the VA locations, we interviewed medical center officials responsible for supply chain management as well as Regional Office officials responsible for oversight of those selected medical centers and their exchange/sale management activities. During these interviews, we discussed selected agency officials' understanding and use of the exchange/sale authority, reviewed data and documentation, addressed what officials did to implement processes for their exchange/sale programs, identified challenges, and took photographs at one location of selected personal property that had been exchanged or sold. Information we obtained from the three selected agencies is not generalizable to all federal agencies but provides illustrative examples of how agencies have used the authority. We conducted this performance audit from August 2017 to November 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on the audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: Comments from the General Services Administration

Appendix III: Comments from the Department of Veterans Affairs

Appendix IV: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the individual named above, the following individuals made important contributions to this report: Nancy Lueke (Assistant Director); Steve Martinez (Analyst-in-Charge); Aisha Cabrer; SaraAnn Moessbauer; Malika Rice; Amy Rosewarne; Jerry Sandau; Travis Schwartz; and Crystal Wesco.
Why GAO Did This Study

According to the U.S. Treasury, the government owns about $1.3 trillion in “personal property” such as computers, furniture, and vehicles. Federal law authorizes agencies to exchange or sell personal property and retain the allowances or proceeds for replacing similar needed property. These are called “exchange/sale” transactions. GSA is responsible for issuing exchange/sale regulations and guiding agencies on the use of the authority. GAO was asked to review agencies' use of the exchange/sale authority. This report (1) describes what is known about personal property exchange/sale transactions from fiscal year 2013 through fiscal year 2017 and (2) examines selected agencies' experiences using the exchange/sale authority and monitoring such activities. GAO analyzed multi-year data compiled by GSA OGP and found the data to be sufficiently reliable. GAO selected three agencies—GSA, the Army, and VA—based on the type, quantity, and value of personal property exchanged and sold; reviewed agencies' personal property policies; examined agencies' monitoring of exchange/sale activities; and interviewed their officials about personal property management.

What GAO Found

According to data compiled by the General Services Administration's Office of Government-wide Policy (GSA OGP), 27 agencies executed exchange/sale transactions, governed by statute and GSA regulations, to exchange (trade in) or sell personal property from fiscal year 2013 through fiscal year 2017. The 27 agencies reported transactions totaling about $3.1 billion. Vehicle sales accounted for $2.6 billion (about 84 percent) of that total. GAO found that GSA officials who procure vehicles for federal agencies and Army officials who purchase helicopters appeared to understand the exchange/sale process and used it frequently. However, Department of Veterans Affairs (VA) officials expressed confusion about key aspects of the authority. For example, officials were unclear about how to sell property; this lack of clarity led to missed opportunities to use sale proceeds for replacing property. GSA OGP officials who guide agencies on the use of the authority acknowledged that the exchange/sale regulations can be confusing, but GSA's plan to amend them is at least 2 years away. Because GSA does not plan to address this confusion in the near term through other means such as bulletins or outreach, agencies' misunderstanding of the authority could lead to additional missed opportunities to be effective stewards of government funds. Regarding monitoring of exchange and sale activities, GAO found that the Army monitored activities consistent with its policy. However, GSA and VA performed limited monitoring because:

GSA had not clarified its responsibilities or defined the scope of its authority for monitoring internal GSA exchanges and sales, and

VA did not have a detailed policy for monitoring and had not communicated information about monitoring to pertinent employees.

Until GSA clarifies its responsibilities and the scope of its authority and VA revises its policy with pertinent details and communicates this information to staff members, neither agency will be positioned to sufficiently monitor exchange/sale activities.

What GAO Recommends

GAO is recommending that GSA OGP address agency confusion about the exchange/sale authority and that GSA clarify its responsibilities and the scope of its authority. GAO is also recommending VA revise its policy to address monitoring and communicate the revision to staff.
Both agencies agreed with the recommendations.
Background

Although less visible than other transportation modes and not as vast, inland waterways allow shippers to transport goods, particularly bulk commodities, in a relatively cost-effective and environmentally friendly manner between ports all along the waterways, and to coastal ports for transportation to international markets. For example, in a report prepared for the National Waterways Foundation, the Texas A&M Transportation Institute found that, for every gallon of fuel burned, 647 tons of cargo can be carried 1 mile by barge, but only 477 tons by train or 145 tons by truck. Additionally, if the cargo transported on inland waterways each year were instead moved by truck, it would take tens of millions of additional truck trips to carry that cargo—more than doubling the number of trucks per day, per lane on a typical rural interstate. Most of the goods moved on the inland and intracoastal waterways are bulk commodities, including coal; petroleum products; chemicals; aggregate construction materials such as sand, gravel, and stone; as well as grain, soybeans, and other agricultural products. Approximately 12,000 miles of inland and intracoastal waterways and channels in the United States are commercially navigable, and approximately 11,000 miles make up the fuel-taxed portion of the system, shown in figure 1. The remaining approximately 1,000 miles of inland and intracoastal waterways and channels are not part of the taxable waterways and contain very few significant lock and dam structures. Some commercial waterways users, especially those on the Upper Mississippi and Ohio Rivers, may never leave the taxable portion of the system, but other vessel operators may navigate through taxable and non-taxable waterways, including connecting deep-draft waterways. Navigation on inland waterways is made possible by locks and dams, navigation structures and aids (such as buoys), as well as channel maintenance and dredging where necessary to maintain a minimum channel depth of 9 feet to support commercial barge traffic. Dams form the foundation of the inland waterways system and create “pools” for navigation during periods of low and medium river flow. Locks at dam sites allow river traffic to move up or down from one pool to another, much like a stairway of water. See figure 2 below. As part of its Civil Works Program, the U.S. Army Corps of Engineers (Corps) operates and maintains the fuel-taxed inland waterways for the purpose of facilitating navigation. The Corps is responsible for balancing its navigation mission with its other civil works missions, including hydropower generation, flood risk management, emergency response, environmental stewardship, and recreation (see fig. 3). For example, the Corps may consider the migration of fish when designing locks and dams that facilitate navigation. Congress appropriates funding for the Corps' Civil Works Program. For inland waterways, the Corps uses funding for two main purposes: (1) inland waterways operations and maintenance and (2) inland waterways construction. From fiscal year 2006 through fiscal year 2017 (the years for which data were available), the Corps obligated an average of $690 million annually for operations and maintenance of the fuel-taxed inland waterways. Funding for operations and maintenance is appropriated entirely from general revenues. Figure 4 shows annual obligations for inland waterways operations and maintenance for fiscal years 2006 through 2017.
For construction projects, Congress appropriates funding from the Inland Waterways Trust Fund (Trust Fund) in addition to funds from general revenues. Since the Inland Waterways Revenue Act of 1978 (1978 Act), commercial waterway users have paid taxes on fuel used by commercial towboats and other vessels that typically move barges, revenues from which are deposited in the Trust Fund. The Water Resources Development Act of 1986 (1986 Act) increased the initial fuel-tax rate per gallon and established a cost-sharing process for inland waterways expenditures. Together, the 1978 Act and the 1986 Act established a means for the inland waterways industry to provide economic support for infrastructure development. These users currently pay a $0.29 per gallon tax on diesel fuel used on the fuel-taxed portion of the inland waterways, revenue from which is then deposited into the Trust Fund. Traditionally, 50 percent of a project's funding is appropriated from general revenues and 50 percent from the Trust Fund, though Congress reduced the Trust Fund's cost share for the ongoing new construction of the Olmsted Locks and Dam project to 25 percent for fiscal year 2014 and to 15 percent for subsequent fiscal years. In fiscal year 2018, commercial waterway users contributed about 35 percent of the $399 million allocated to various construction projects (see fig. 5). On average, from fiscal years 1997 through 2018, the Corps has allocated about $240 million annually for construction to repair or improve existing inland-waterways navigation infrastructure. In its 2017 annual financial report, the Corps notes that the number of lock closures on inland waterways (including the fuel-taxed inland waterways) due to preventable mechanical breakdowns and failures lasting longer than one day or longer than one week has decreased since fiscal year 2010, but that the lock closures that do occur can result in substantial delays to shippers, carriers, and users and are a factor in the cost of shipping commodities on waterways. According to the Inland Waterways Users Board (Board)—an advisory committee made up of industry stakeholders—U.S. inland waterways infrastructure is in need of modernization. The Corps currently manages construction projects aimed at replacing, expanding, and modernizing existing locks and dams. For fiscal year 2018, the Corps has allocated about $399 million of the money Congress appropriated for civil works construction to a total of five inland waterways construction projects: four ongoing projects and one new project (see fig. 6). According to the Board, as of December 2017, 14 new lock and dam construction projects have been authorized by Congress but have not yet received construction funding. In addition to the Corps and the Board, several entities have roles related to the inland waterways:

The Assistant Secretary of the Army for Civil Works (ASA-CW): The ASA-CW establishes policy direction and provides supervision of the Department of the Army functions relating to all aspects of the Corps' Civil Works program.

Maritime Administration: Within the Department of Transportation, the Maritime Administration promotes the use of waterborne transportation and its integration with other segments of the transportation system. It is also charged with maintaining the health of the merchant marine, since commercial mariners, vessels, and intermodal facilities are vital for supporting national security.
The U.S. Coast Guard (Coast Guard): Within the Department of Homeland Security, the Coast Guard is responsible for, among other things, facilitating the safe and efficient flow of commerce on the navigable waterways of the United States. For example, the Coast Guard regulates and enforces safety standards for inland waterways vessels and operator licensing, conducts icebreaking to facilitate the flow of commerce and relieve flooding from ice dams, and installs and monitors aids to navigation that mark the navigable channel (such as buoys, beacons, and lights) to facilitate the safe movement of vessels. The Coast Guard coordinates with the Corps to ensure aids to navigation are properly installed and makes adjustments as channel conditions may dictate.

Office of Management and Budget (OMB): Within the Executive Office of the President, OMB oversees the implementation of the President's policy, budget, management, and regulatory objectives. Related to inland waterways, OMB works with the Corps and the ASA-CW to formulate the annual President's budget request and issues policies related to the budget's implementation, project study, and prioritization.

The Corps Allocates Funds for Operations and Maintenance Based on Economic Benefits and Risk but Lacks a Method of Tracking Deferred Maintenance for Inland Waterways

The Corps Allocates Funds for Operations and Maintenance Projects Based on Economic Benefit and Risk

As part of its management of the inland waterways, the Corps budgets for the costs of operations and maintenance (which are funded from one appropriation account) and construction (funded from a separate appropriation account) and develops an annual budget request to submit to OMB. The Corps develops this budget request for all its civil works activities, including locks and dams on the fuel-taxed inland waterways system; this request is reviewed and finalized by the ASA-CW and OMB before being submitted to Congress as part of the annual President's budget request. To prepare its annual budget request, the Corps identifies potential operations activities and maintenance projects and submits estimates of the costs to complete those activities, but not all identified maintenance projects are included in the budget request. According to Corps officials, as part of the budget request development process, the Corps provides OMB and the ASA-CW with a variety of funding proposals that would enable different levels of service for all of its civil works assets, including inland waterways. However, according to Corps officials, the President's budget request for civil works—including funding for inland-waterways maintenance projects—is based on broader administration priorities and does not request funding to complete all identified maintenance projects. The Corps then receives annual appropriations for its Civil Works Program, from which it allocates funding to each of its missions, including inland waterways navigation. Figure 7 illustrates the Corps' budget formulation and execution process. In 2008, the Corps began implementing an asset management process to guide its management of the Civil Works Program, including inland waterways. Under this process, the Corps determines the hours of operation for each lock, which maintenance activities to perform, and which construction projects to prioritize based on the economic value these activities will provide.
The Corps ranks maintenance projects identified during the budget formulation process based on the value or level of service a project is expected to provide as well as how critical it is, and it funds as many of the priority projects as possible given available funding; the rest are deferred. The Corps assesses the value of inland waterways assets (such as waterways, locks, and dams) based primarily on the economic benefits derived from improved commercial navigation—that is, the benefits achieved by allowing shippers to transport commodities to both domestic and foreign markets more cost effectively than they would using other modes of transportation (such as truck and rail). Economic benefits are generally determined using measures of commercial use, and assets are categorized as high, moderate, and low commercial use. The Corps' approach to operations and maintenance is as follows:

Operations: The Corps allocates funding for operations based on service priorities. The Corps operates locks at varying levels of service (i.e., hours of operation) based primarily on past commercial traffic volume, but also considering the volume of recreational traffic and available resources. The Corps operates high-use locks continuously (24/7), while operating those with less commercial traffic and fewer economic benefits less frequently, sometimes by appointment only.

Maintenance: The Corps allocates funding for maintenance projects based on the risk of not performing maintenance; this risk is determined by considering both the condition of an asset and the economic impact of a reduction in service should the asset fail (that is, the traffic that would be affected if a lock or dam were to become unusable). A simplified sketch of this kind of risk weighing appears after this list.
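The formulation below is our illustrative simplification of such risk-based ranking, not the Corps' actual scoring model. Under these assumptions, each candidate maintenance project could be scored as

\[ R_i = P_i \times E_i \]

where, for each project $i$, $P_i$ is the estimated likelihood that the asset fails if the maintenance is deferred (driven by the asset's condition) and $E_i$ is the economic consequence of that failure (driven by the commercial traffic that would be delayed or blocked). Projects would then be funded in descending order of $R_i$ until available funding is exhausted. Under such a scheme, a deteriorated lock on a high-use waterway would outrank an equally deteriorated lock on a low-use waterway, which is consistent with the high, moderate, and low commercial-use categories described above.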
Lack of a Deferred Maintenance Measure for Inland Waterways Limits the Corps' Ability to Identify and Communicate Estimated Maintenance Costs

According to Corps and ASA-CW officials, the Corps does not know how much deferred maintenance exists for inland waterways, because there is no agreed-upon definition for deferred maintenance. Corps and ASA-CW officials identified several challenges related to developing a useful definition with which to measure deferred maintenance:

Using the total cost to conduct all maintenance identified during the budget formulation process may not be useful as a budget tool, because the Corps would not have the capacity to conduct all identified maintenance in one fiscal year.

A single measure may not be useful to gauge the condition of the system, because not all deferred maintenance projects have the same effect on system reliability. For example: Some identified maintenance, such as preventive maintenance conducted less frequently than preferred (like painting lock components to prevent future corrosion), may not affect reliability or function in the short term. Deferring the replacement or rehabilitation of broken or malfunctioning components—such as a lock gate arm—on low-use waterways may result in closures on those waterways or delays related to the condition of the lock, but would affect a relatively small amount of cargo and vessels and have a smaller economic impact than closures on high-use waterways. Deferring the replacement or rehabilitation of broken or malfunctioning components on high-use waterways may result in closures that prevent traffic to large sections of the inland waterways system and affect a large portion of cargo transported via waterways.

Some deferred maintenance projects may never be undertaken, while others are planned for the near future. Corps officials told us that, depending on the risk associated with not completing a particular maintenance project, the Corps may choose to never complete the project, such as mowing the grass at a low-use lock and dam facility. Conversely, some incomplete projects represent later phases of projects that are already under way and are planned for completion in the near term.

The lack of a definition and measure of deferred maintenance for inland waterways projects is inconsistent with federal internal-control standards, which call for agencies to identify the information requirements needed to achieve objectives and address identified risks (such as risks to the reliability of the waterways) and to process relevant data to develop that information. Further, internal control standards call for agencies to communicate information externally—such as to Congress and OMB—to achieve agencies' objectives. Corps and ASA-CW officials acknowledged that there is a lack of information on deferred maintenance provided to Congress. One Corps official suggested that the Corps may need more than one measure of deferred maintenance to capture differences in the type and consequences of various projects. Additionally, ASA-CW officials noted that once a meaningful definition or metric for deferred maintenance is identified, the Corps lacks a way to track this information. Without a measure—or measures—of deferred maintenance for inland waterways (1) that the Corps can use to budget for and manage the inland waterways, (2) that reflects its priorities, and (3) that accurately conveys a consistent and well-defined measure of deferred maintenance that can be communicated to outside stakeholders, the Corps is limited in its ability to identify preventive maintenance that could forestall more costly maintenance or rehabilitation in the future and to communicate its estimated maintenance costs to OMB and Congress. In turn, the lack of a measure could limit the ability of Congress to make informed funding decisions pertaining to the Corps. Both the stakeholders we interviewed and the Corps have identified effects on the reliability of the inland waterways related to current funding levels for operations and maintenance. For instance, many stakeholders we spoke to said the funding the Corps receives for operations and maintenance on inland waterways has not been sufficient to maintain the stakeholders' desired level of reliability. Some stated that the Corps is currently operating using a “fix as fails” approach: that is, requesting enough funding to be able to respond to crises but not to conduct preventive maintenance. Further, many stakeholders said there is potential for some waterway users to switch to other modes of transportation based on unreliability. For instance, two stakeholders stated that businesses may be “chased away” because the inland waterways system continues to be unreliable due to unscheduled closures for maintenance. For example, during the course of our review, one lock on the Ohio River experienced repeated unscheduled closures. One such closure lasted from September 6, 2017, through September 14, 2017, during which time no vessels were able to travel through the lock.
According to a June 2017 Corps report on the causes of mechanical breakdowns leading to unscheduled lock closures, routine maintenance occurs less frequently than in the past due to a lack of funding, and delayed maintenance increases the risk of operational or catastrophic failure that results in lock closures. Figure 8 illustrates the condition of both deteriorating and recently rehabilitated inland-waterways navigation facilities. Identifying and communicating about deferred maintenance could help Congress and OMB understand the extent of any problems with reliability that could affect the inland waterways system.

Incremental-Funding Approach for Inland-Waterway Construction Projects Contributes to Cost Overruns and Schedule Delays

Inland-Waterways Construction Projects Are Individually Funded according to Various Priorities

The Corps manages inland-waterways construction projects—the modernization and rehabilitation of existing locks and dams (called major rehabilitation), or the construction of new structures—to ensure the facilities continue to function and meet future requirements, and it prioritizes these projects based on expected costs and benefits. As shown in figure 9, construction projects are developed in response to an identified problem. Congress then authorizes inland-waterways construction projects for study and construction and provides funding through the annual appropriations process, although some authorized projects may not receive funding. Since 1996, Congress has appropriated construction funding that the Corps has allocated toward 20 projects, of which 15 have been completed. The Corps assesses the net economic benefits of inland-waterways construction-project alternatives by comparing estimated direct costs (e.g., construction costs to build a new lock chamber) to estimated reductions in waterway transportation costs (e.g., reduced travel costs related to a reduction in the time it might take for a barge to pass through a larger lock chamber). For the Corps to recommend construction, the project must have a benefit-cost ratio—that is, the ratio of estimated benefits to estimated costs—greater than 1 to 1 using a statutorily defined discount rate that varies from year to year. The project must then be authorized for construction by Congress through legislation to be eligible for funding, which typically occurs in a Water Resources Development Act. The Corps—with advice from the Inland Waterways Users Board (Board)—prioritizes authorized inland-waterways construction projects according to estimated net economic benefits and an assessment of the economic and safety consequences of not doing the project. In collaboration with Corps headquarters, division, and district offices, the ASA-CW determines which civil works construction projects will be prioritized for inclusion in the budget request to OMB. OMB considers the recommendations of the ASA-CW and the Corps in deciding which projects to include in the President's budget request. While Corps projects with a benefit-cost ratio of at least 1 to 1 at the statutorily defined discount rate are eligible to seek funding, OMB assesses projects against a different threshold in determining which projects are included in the President's budget request.
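To illustrate why the choice of discount rate matters, the benefit-cost ratio can be written as the ratio of discounted benefits to discounted costs; the figures in the example below are hypothetical and do not represent any specific Corps project:

\[ \text{BCR}(r) = \frac{\sum_{t=0}^{T} B_t \,(1+r)^{-t}}{\sum_{t=0}^{T} C_t \,(1+r)^{-t}} \]

where $B_t$ and $C_t$ are the benefits and costs estimated for year $t$ and $r$ is the discount rate. Because construction costs are concentrated in the early years while navigation benefits accrue over decades, a higher discount rate shrinks the present value of benefits far more than that of costs. For example, a hypothetical project costing $350 million up front and yielding $30 million in annual transportation savings for 50 years would have a benefit-cost ratio of about 2.4 at a 2.5 percent discount rate (30 × 28.4 / 350) but only about 1.2 at a 7 percent rate (30 × 13.8 / 350), where 28.4 and 13.8 are the respective 50-year annuity factors. The same project could thus clear a 1 to 1 threshold at both rates yet fall well short of a 2.5 to 1 threshold at the higher rate.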
In line with OMB practice since the mid-2000s (and, according to OMB officials, consistent with their evaluation of most federal programs per their guidance set in 1992), generally only inland-waterways construction projects with a benefit-cost ratio of at least 2.5 to 1 using a 7 percent discount rate are included in the annual President's budget request. In recent years, only one of the Corps' ongoing construction projects—the Olmsted Locks and Dam project—has met this threshold. Congress appropriates funds to the Corps' Civil Works construction account, and the Corps allocates some of that funding to inland-waterways construction projects. In recent years, Congress has appropriated funds for projects included in the President's budget request and has directed the Corps to allocate appropriated amounts that exceed the amount requested in the President's budget request to other projects as depicted in step 8 in figure 9. For example, in fiscal year 2018, the Administration requested $175 million for the Olmsted Locks and Dam project, but five projects were funded that year. In the Joint Explanatory Statement accompanying the appropriations, Congress directed the Corps to allocate funds to inland-waterways construction projects prioritized by economic effect in such a way that the Corps uses all estimated Trust Fund revenues. In accordance with this direction, the Corps allocated $399 million to inland-waterways construction projects, with more than half—$224 million—going toward the other three ongoing inland-waterways projects and a new major rehabilitation project (see fig. 10). Stakeholders we spoke to stated that the process for determining which construction projects receive funding can be challenging. Some stated that the use of different discount rates and benefit-cost ratio thresholds for authorization and budgeting purposes can create confusion as to whether projects will be funded. Also, some stakeholders stated that because the 7 percent discount rate used by OMB to calculate the benefit-cost ratio is higher than the statutory rate used in recent years, use of the OMB discount rate can result in projects being excluded from the President's budget request, an exclusion that can reduce the likelihood of the project receiving funding. According to the Board, as of December 2017, 14 construction projects have been authorized for construction but have not been allocated construction funding, and an additional 7 major rehabilitation projects are also candidates for construction over the next 20 years. However, Corps officials stated that, once the Olmsted Locks and Dam project is completed, none of the currently authorized projects will meet OMB's threshold for inclusion in the President's budget request. Further, some stakeholders told us that the Corps' policy—developed to provide additional information to OMB during budget development—to recalculate a project's benefit-cost ratio every few years, including while the project is under construction, can create challenges. For one, ongoing projects included in the President's budget request have subsequently been excluded in later years due to a lower updated benefit-cost ratio, which might reduce the likelihood of the project's being allocated funding.
For example, the Lower Monongahela Locks and Dams project had a benefit-cost ratio of 6.7 to 1 at a 7.75 percent discount rate when construction funds were first expended in fiscal year 1995 (based on benefits and costs as estimated when the project was authorized in fiscal year 1992) and has been allocated funding every year since. However, this project was not included in either the fiscal year 2017 or 2018 President's budget requests due in part to its updated benefit-cost ratio having fallen below the 2.5 to 1 threshold because of increased costs and changes to the expected benefits. Although it was not included in the President's budget request, the Corps ultimately allocated funding for the project in fiscal years 2017 and 2018 based on congressional direction.

Incremental Funding of Inland-Waterways Construction Projects Contributes to Cost Overruns and Schedule Delays

Since at least 1995, all inland-waterways construction projects have been funded incrementally, meaning that annual appropriations have covered a portion of the project's estimated costs. There are several reasons that the Administration may request and Congress may appropriate funding for inland-waterways construction projects incrementally—as they both have done in recent years—in lieu of full upfront funding. Available annual funding is generally less than the amount required to cover the full cost of one new construction project. In addition, the Corps (like other federal agencies) cannot enter a contract that exceeds available funding unless authorized by law. For example, based on average annual Trust Fund revenues since 2015 of about $107 million, a 50-50 cost share would provide about $214 million in construction funding annually, whereas the four ongoing construction projects were each originally estimated to cost more than that amount. Further, of the 10 new construction projects prioritized to be completed next in the Corps' capital investment strategy, as of 2016, 7 are estimated to cost at least $350 million. Additionally, these projects—even once begun—must compete annually with other funding priorities across the federal government. We have previously reported that full upfront funding of capital assets can be challenging to obtain in an era of resource constraints; incremental funding can make it easier for agencies to meet mission capital demands within the constraints of their appropriation. Further, while the Corps could carry over appropriations until it accrues sufficient funds to fully fund a project upfront (because its construction appropriations historically have not expired), Corps officials we spoke to had concerns about this practice. They stated that carryover funds may be seen as available and reprogrammed to other civil works efforts (such as rebuilding infrastructure in the wake of a natural disaster) and that Congress and the Board both expect the Corps to obligate appropriated funds. In addition, some stakeholders had concerns that delaying the start of construction until full upfront funding was appropriated could result in further deterioration or increased maintenance costs for those facilities. Finally, according to some stakeholders we spoke to, the current incremental funding approach has allowed construction projects on multiple waterways to occur at once—a way of spreading benefits across the system and providing some indication to local users and beneficiaries that their local facility will be repaired or replaced.
Nonetheless, incremental funding for inland waterways projects—among other factors such as engineering design changes—has contributed to increased costs and schedule delays because it results in inefficient contracting practices. Corps reports and academic studies have found that incremental funding has resulted in inefficient contracting for construction projects, in part because funding is not guaranteed beyond the current year and contractors must stop working once funds are exhausted. Because the Corps receives annual appropriations for a portion of the total estimated cost of a project, the Corps awards contracts for separable elements that can be constructed and left for a period of time with minimal damage and safety risks if further funding is unavailable (such as a contract to build part of a lock wall). According to Corps district officials, this practice has resulted in the Corps entering into many more contracts for each project than they would if they had full upfront funding. For example, Corps officials told us that due to incremental funding, the Lower Monongahela Locks and Dams project is currently on its 14th construction contract even though it was originally planned to be completed using only two contracts. Corps officials told us that this contracting practice is inefficient and can lead to cost overruns due to, for example:

• contractor mobilization and demobilization, such as moving heavy equipment on and off the construction site, at the beginning and end of each contract;
• prolonged construction due to multiple contractors being unable to work at the same worksite at the same time;
• extra administrative expenses associated with letting multiple contracts;
• increased cost of fuel and construction materials (e.g., steel and cement) from year to year;
• higher costs of buying construction materials in smaller quantities; and
• inflation due to prolonged construction.

Further, according to Corps officials and stakeholders, additional challenges related to the timing and amount of funding allocated in a given fiscal year can exacerbate inefficiency related to incremental funding. For example, while under a continuing resolution, the Corps does not allocate funding to projects that were not included in the President's budget, per OMB policy, which can delay funding for projects until Congress provides appropriations for the remainder of the fiscal year. Thus, in fiscal year 2018, funding was delayed for the three ongoing projects that were not included in the President's budget request. Although project work can continue if the Corps has some carryover funds, Corps officials told us that, if they exhaust their funds, a continuing resolution could mean they will not be able to exercise the next option on a construction contract. As a result, the contractor would have to stop work and shut down the construction site, and the Corps would need to close the existing contract, repackage the remaining work, and re-advertise the contract—all tasks that can increase the full cost of a project. Additionally, according to Corps officials, when projects receive smaller portions of funding than estimated for the upcoming fiscal year, the amount may not be enough to allow a contractor to continue on the most efficient construction schedule for that contract or contract option, which can have the effect of increasing costs.
Moreover, according to Corps district officials, the benefit-cost ratios for some ongoing projects have decreased in recent years in part because the projects have experienced increased costs (relative to expected benefits) due to a number of factors, including inefficient contracting stemming from incremental funding, which may affect the project's priority status and inclusion in the President's budget request. All four of the Corps' ongoing construction projects have experienced cost overruns and, as shown in figure 11, schedule delays. According to Corps officials, some of these cost increases and delays were due to inefficient contracting stemming from incremental funding. For example, Corps officials currently expect that the Kentucky Lock Addition project will require at least $229 million more (about 19 percent above the original estimated cost) as a direct result of inefficient contracting and be completed 17 years later than planned. Similarly, the Corps estimates that the Chickamauga Lock project will need at least $170 million more (about 24 percent above the original estimated cost) due to inefficient contracting and be completed at least 13 years later than planned. The amount of estimated cost overruns for just these two projects could potentially fund an entire additional project.

Timing and Distribution of Funding Could Reduce Cost Increases and Schedule Delays for Inland-Waterways Construction Projects

In the absence of full funding, our funding simulation demonstrates that contracting efficiency for inland-waterways construction projects could be increased by funding fewer projects at a time. We developed a simulation for a set of four hypothetical new construction projects under different funding approaches to explore the effects of different funding patterns and timing on total project costs and timeframes. We assumed that all four hypothetical projects could be completed for $2 billion ($500 million each, with expected funding of $100 million per year) within 5 years of construction. For our simulation, we assumed that $200 million would be available to allocate each year across the four projects—an amount roughly similar to recent funding levels for actual inland waterways projects. We developed five funding approaches that varied in the pattern and timing of funding allocated toward each project. Given these patterns of funding, we also incorporated cost effects that we hypothesized would occur. For example, for each year that a project did not receive full funding—that is, the entire remaining costs of the project were not provided—we assumed the remaining funding required to complete the project would increase to account for contracting inefficiencies that were likely to occur due to incremental funding, such as increased contractor mobilization and demobilization. Also, for any year that a project received funding in smaller amounts than expected, we assumed that funding required to complete the project would rise due to exacerbated contract inefficiencies due to such factors as having to buy materials in smaller quantities or break work into smaller separable elements. In addition, we incorporated inflation into projects' remaining costs when funding for those projects was delayed. See appendix III for more detailed information regarding our methodology for this simulation.
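To make the simulation's mechanics concrete, the sketch below applies these escalation rules to a single hypothetical project. It is a simplified illustration using the assumptions stated in appendix III (2 percent escalation for incomplete funding, 0.5 percent for smaller-than-expected funding, and 2 percent inflation for delayed starts); it is not the model that produced our reported results, and the funding pattern shown is arbitrary.

```python
# Simplified sketch of the funding simulation's cost-escalation rules for
# one hypothetical $500 million project. The escalation percentages mirror
# the report's stated assumptions; the funding pattern is arbitrary.

EXPECTED_ANNUAL = 100e6   # expected funding per year
INEFFICIENCY = 0.02       # applied each year the project is not fully funded
UNDERFUNDING = 0.005      # applied when a year's funding is below expected
INFLATION = 0.02          # applied while the project's start is delayed

def simulate(funding_by_year, total_cost=500e6):
    """Return (total spent, completion year) under incremental funding."""
    remaining, spent = total_cost, 0.0
    for year, funding in enumerate(funding_by_year, start=1):
        if funding == 0 and spent == 0:
            remaining *= 1 + INFLATION          # start delayed: inflation
            continue
        allocation = min(funding, remaining)
        spent += allocation
        remaining -= allocation
        if remaining <= 0:
            return spent, year                  # project complete
        remaining *= 1 + INEFFICIENCY           # not fully funded this year
        if allocation < EXPECTED_ANNUAL:
            remaining *= 1 + UNDERFUNDING       # smaller-than-expected amount
    return spent, None  # not finished within the simulated horizon

# Example: $50 million per year (half the expected level) for 15 years.
total, year = simulate([50e6] * 15)
print(f"spent ${total / 1e6:.0f} million, finished in year {year}")
```

In this arbitrary pattern, halving annual funding turns the $500 million, 5-year project into one costing roughly $570 million and finishing in year 12, the qualitative effect the simulation is designed to capture.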
While fully funding projects up front would help to avoid cost increases or delays due to inefficient contracting, we found that, even with incremental funding, varying the timing and amount of funding can reduce inefficiency (see fig. 12). For example, we found that compared to other approaches, an incremental funding approach that concentrates all available funding on one of the four projects at a time—as in Approach A, shown in figure 12—results in lower cost overruns and faster construction than an approach that funds more projects simultaneously with smaller amounts of funding, as in Approach B (see app. III for results for all five approaches). In addition, concentrating funding toward one project could lead to greater years of benefits—as measured by the Corps as the number of years a facility has been constructed and available for use by vessels. However, according to Corps officials and stakeholders we spoke to, there may be risks associated with concentrating funding on one project at a time due to concerns with delaying the start of other high-priority projects. For example, during the time in which a project is waiting for funding, the infrastructure may experience further deterioration, and vessels using the facility may experience increased delays. Corps officials we spoke to about this simulation generally agreed that the Corps' current funding approach most closely resembles Approach B, with most funding going to the Olmsted Locks and Dam project while the remaining three ongoing projects receive smaller amounts (see also fig. 6). OMB and GAO have advocated for full upfront funding of capital projects as a way to recognize full budgetary commitments, but, as discussed, fiscal pressures on both the Corps and Congress may make it difficult to request and appropriate full funding. OMB's Capital Programming Guide states that full funding can help ensure that all costs and benefits are taken into account at the time decisions are made to provide resources, increase the opportunity to use more competitive contracts, and allow for more efficient work planning. Further, we have previously reported that full funding is an important tool for maintaining government-wide fiscal control, because failure to recognize the full costs of proposed commitments during budget decisions could lead to distortions in the allocation of resources. We have also reported that incremental funding of capital projects can reduce available funding for future projects and erode future program flexibility because funding is dedicated to projects begun in previous years. Though providing full upfront funding would likely reduce the overall costs of inland waterways construction over the long term, it may require a significant increase in annual appropriations in the short term, which Corps officials consider to be highly unlikely. Both OMB and GAO have acknowledged the challenges associated with "spikes" in appropriations that would be required for full funding and have suggested that innovative funding mechanisms could be used to mitigate this challenge. In 2010, we recommended that the Corps work with Congress to develop a more stable project-funding approach for Civil Works projects that provides more efficient use of funds, but the Department of Defense only partially concurred with the recommendation, stating that it would support budget decisions made by the administration.
However, without some change in the way inland-waterways construction projects are funded to either provide full funding or reduce the effects of incremental funding by concentrating on fewer projects at one time, current cost increases and schedule delays resulting from inefficient contracting are likely to continue. For example, according to the Corps' 2016 capital investment strategy, under a scenario in which construction funding is limited only by available Trust Fund revenues, in the next 20 years the Corps could complete 16 of the 22 major rehabilitation and new construction projects identified as priority projects for approximately $7 billion; however, because these estimates do not account for cost overruns due to the current incremental funding approach, the Corps is unlikely to meet this goal.

Stakeholders Identified Limitations and Trade-offs Associated with Proposed Options for Increasing Available Funding for Inland-Waterways Construction

In addition to adjusting the timing and distribution of funding, according to some of the stakeholders we interviewed, increasing available funding for construction would provide more upfront funding to enable more efficient contracting. Stakeholders said that with additional funding, the Corps may be able to complete ongoing inland waterways projects more quickly and begin other construction projects. We asked stakeholders representing 55 national and regional entities and researchers about options to increase available funding for inland waterways construction that have been proposed by policymakers and in relevant literature, including:

• altering the cost share between the Trust Fund and federal appropriations;
• requiring other users and beneficiaries of the waterways to contribute to the Trust Fund;
• increasing or adding fees for commercial users;
• expanding opportunities for local sponsors to contribute to funding; and
• pursuing alternative financing arrangements.

While each option has potential benefits, stakeholders we interviewed identified limitations or trade-offs that could affect the feasibility of each option.

Altering the Trust Fund cost share. Altering the percentage of the Trust Fund cost share for construction projects could increase available funding to complete construction projects. For example, in 2014 the Trust Fund's cost share for the Olmsted Locks and Dam project was reduced by statute from 50 to 25 percent for fiscal year 2014, and to 15 percent for subsequent fiscal years—thereby increasing the federal share to 85 percent—to speed the pace of other inland-waterways construction projects (by increasing the overall funding available for those projects) and to reduce the costs to commercial users. The Inland Waterways Users Board (Board), in its April 2018 annual letter to Congress, proposed making such a change for all future projects. Specifically, the Board proposed increasing the federal government's share of construction costs from 50 percent to 75 percent. According to the Board and some stakeholders, this could increase the available funding for Corps construction projects on the inland waterways system. Because each Trust Fund dollar would be matched by three dollars from general revenues as opposed to one dollar under a 50/50 split, overall funding may be increased. The Board stated that this approach may also enable the Corps to start and complete projects more quickly.
For example, as shown in figure 12, with more upfront funding available for each project, the Corps may be able to contract for projects more efficiently than if it received smaller amounts of funding each year. However, some stakeholders said additional appropriations for inland waterways construction from general revenues would be required to achieve the benefits of this option, an approach that could, in turn, reduce funding available for other congressional priorities or increase the federal deficit. Absent additional appropriations, the amount of funding for construction could actually be reduced. For example, if appropriations from general revenues were $100 million per year under both scenarios, total funding for inland waterways under a 75/25 split would be only about $133 million, instead of $200 million under the traditional 50/50 split. To provide the same $200 million for construction, but reduce the costs to commercial users under a 75/25 split, appropriations from general revenues would need to increase to $150 million.

Require other users and beneficiaries of the waterways to contribute to the Trust Fund. Some stakeholders we spoke to proposed requiring that other users of the waterways contribute to the Trust Fund. Recreational boaters, municipal water utilities, and hydropower utilities already pay fees associated with their use of inland waterways, but this revenue is not directed toward the Trust Fund. For example:

• recreational users, such as recreational boaters and fishermen, on all waterways pay about $628 million annually in fees on fishing equipment and taxes on fuel used in motorboats; these revenues are currently deposited into the U.S. Fish and Wildlife Sport Fish Restoration and Boating Trust Fund, which is used to sustain sport-fishing populations;
• municipal water utilities that have Corps' water storage contracts on the inland waterways pay fees that are currently deposited into the general fund of the Treasury; and
• power generated by federally owned hydroelectric dams (including those owned by the Corps on the inland waterways) is sold at rates intended to cover the government's costs of operating and maintaining the dams, among other things.

Other infrastructure trust funds are supported in part through user fees paid by both commercial and non-commercial users. For example, excise taxes, primarily on motor fuels and commercial trucks and tires, are deposited into the Highway Trust Fund, which is used to provide grants to state highway or transportation agencies. Some stakeholders said that all users who benefit from the pools created by navigation dams should bear some portion of the costs of the infrastructure, and revenue collected from these users could potentially be redirected to the Trust Fund. However, some other stakeholders said that these users as well as U.S. taxpayers that do not use the waterways already contribute to inland waterways construction, operations, and maintenance costs through their federal tax contributions to general revenues. We have previously found that in theory, the extent to which a program is funded by user fees should generally be guided by who primarily benefits from the program; however, the extent to which a program benefits users or the general public is not usually clear cut. In addition, redirecting revenue from fees currently paid by other users of the waterways to inland waterways would reduce funding available for other congressional priorities, as these funds are currently being directed toward other uses.
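Before turning to the remaining options, the cost-share arithmetic in the 75/25 example above can be stated compactly; the sketch below simply reproduces the report's illustrative figures.

```python
# Cost-share arithmetic from the 75/25 example above: the Trust Fund
# supplies the nonfederal share, so the total construction funding that a
# general-revenue appropriation supports is that appropriation divided by
# the federal share.

def total_funding(general_revenues, federal_share):
    """Total construction funding supported by a given appropriation."""
    return general_revenues / federal_share

def appropriation_needed(total, federal_share):
    """General-revenue appropriation needed to reach a funding total."""
    return total * federal_share

print(f"50/50 split, $100M appropriated: ${total_funding(100e6, 0.50) / 1e6:.0f}M total")
print(f"75/25 split, $100M appropriated: ${total_funding(100e6, 0.75) / 1e6:.0f}M total")
print(f"75/25 split, $200M total needs: ${appropriation_needed(200e6, 0.75) / 1e6:.0f}M appropriated")
```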
Increasing or adding fees for commercial users. Past administrations as well as entities such as the Congressional Budget Office have proposed increasing revenue for inland waterways construction by increasing existing fees or imposing additional fees, such as lockage fees, for commercial users of the inland waterways—the only group that is currently paying the fuel tax—as they are the primary beneficiaries. For instance, in a legislative proposal accompanying the fiscal year 2019 President's budget request, the current administration proposed increasing the number of waterways subject to the fuel tax, which could have the effect of increasing the amount some users pay or increasing the number of commercial users subject to the tax. However, some stakeholders pointed out that increasing or adding fees for these users would raise the costs of transportation on the waterways, which could lead shippers to switch to other modes of transportation (such as trucks and rail, which are less efficient) and ultimately reduce both waterways traffic and Trust Fund revenue. Specific proposals for increasing or adding to existing fees are described in more detail below.

Index fuel tax to inflation: Two stakeholders said that indexing the fuel tax to inflation could help the Trust Fund retain its purchasing power over time. In fiscal year 1994, the fuel tax was set at $0.20 per gallon, and it was not raised again until 2015, when Congress increased the tax to $0.29 per gallon with the support of commercial users—close to the inflation-adjusted level of the 1994 rate. However, the rate was not set to automatically rise with future inflation, which reduces the purchasing power of the fuel tax over time. For example, according to our analysis of fuel tax revenue for 1994–2014, if the fuel tax had been indexed to inflation as of 1994, about $400 million in additional revenues would have been raised over the 20-year period between 1994 and 2014. If the additional $400 million were matched by general revenues dollar for dollar, a total of $800 million more would have been available to the Corps for construction projects.

Annual vessel fee: Citing the insufficiency of existing revenue to pay the users' share of capital investment costs, the current administration has proposed a new annual per vessel fee for commercial users to help finance future construction projects and cover a portion of the cost of operating and maintaining them (operations and maintenance has historically been a federal responsibility). The current administration expects this fee would raise approximately $1.78 billion in new revenue from fiscal years 2019–2028 ($178 million annually) to supplement revenue from the existing fuel tax. In its annual letter to Congress, the Board said this proposal is similar to one the prior administration proposed and that Congress has repeatedly rejected because it would more than double the amount collected from commercial users of the inland waterways system each year, with associated consequences for shipping costs and traffic diverted to other modes.

Lockage fees: Various groups have proposed collecting lockage fees from commercial users to tie fees more closely to use of the infrastructure and increase available funding. For example, prior administrations' budget proposals have recommended replacing or supplementing the fuel tax with lockage fees.
According to the Transportation Research Board, lockage fees could increase available funding for construction, are moderately easy to administer, and could be implemented on a system-wide basis, with lock operators keeping track of lock use. However, some stakeholders stated that the relative unknowns of how a lockage fee would be implemented make it less appealing than the current, familiar fuel tax, which they are able to incorporate into their operating budgets. Additionally, some stakeholders told us adding lockage fees—just like increasing the fuel tax or adding other fees—would increase shipping costs and could reduce traffic on the inland waterways. Further, some stakeholders raised concerns about the equity of lockage fees, as all users benefit from the system as a whole, but not all users frequently pass through locks. For example, as one stakeholder pointed out, the Mississippi River has no locks and dams from St. Louis to New Orleans, so users that operate chiefly on that part of the system may not need to pay lockage fees. As such, lockage fees would affect some commercial users more than others: if the fuel tax were replaced with lockage fees, some users (those that do not routinely pass through locks, but benefit from the pools created) may ultimately pay much less than they currently do, while others (those operating on areas of the system with a high number of locks) would pay much more.

Expanding the use of contributed funds. Expanding the Corps' authority to allow local sponsors—generally state and local governments or interstate agencies—to contribute to the costs of project construction, as is the case for other types of water resource projects, could increase available funding. The Water Resources Reform and Development Act of 2014 established a pilot project that enabled the Corps to accept contributed funds from nonfederal interests to pay for the costs of operating inland waterways facilities but does not allow such contributions for maintenance or construction. Some stakeholders said expanding the current use of contributed funds for operations expenses by enabling local sponsors to contribute funds for construction could potentially benefit some communities and increase available funding. However, the costs for construction and maintenance of facilities on high-use waterways would likely be too high for local sponsors to offset. Moreover, we have reported that state and local governments face long-term fiscal pressures, which may limit their ability to contribute to costs for navigation locks in their jurisdictions.

Pursuing alternative-financing arrangements. The current administration and others have proposed alternative-financing options that could enable the Corps to leverage either private capital or other available funds in order to provide full upfront funding for inland waterways construction projects. Numerous proposals call for the Corps to leverage private capital, such as public-private partnerships or debt financing, to access full funding at the beginning of inland-waterways construction projects. The Water Resources Reform and Development Act of 2014 authorized the Corps to implement pilot programs to explore the use of debt financing, such as low-interest loans provided under the Water Infrastructure Finance and Innovation Act of 2014, and public-private partnerships for civil works water resources projects.
Similarly, the current administration’s 2018 Legislative Outline for Rebuilding Infrastructure in America proposes authorizing the Secretary of the Army to execute agreements with non-federal public or private entities for civil works water resources construction projects. While some stakeholders stated that alternative-financing arrangements could increase available funding for inland-waterways construction projects, they were unsure of whether these agreements would work in practice. According to some stakeholders, public-private partnerships and debt-financing would provide upfront funding with an expectation of either a profitable return to a private equity partner or repayment of debt; however, according to some stakeholders, there is limited interest in entering into these financing arrangements among private sector investors because there is no clear and viable revenue stream to provide such returns. For instance, some stakeholders told us that increasing fees for commercial users to provide a revenue stream could have the effect of reducing traffic on waterways, which would reduce the revenue potential of fees. Alternative-financing arrangements would also require congressional action to implement. Specifically, depending on the structure of these financing agreements, alternative financing would require legislative changes, which could include granting the Corps authority to: (1) enter into public-private partnerships, (2) use debt financing, (3) use contract authority to obligate funding beyond what is appropriated in a given year, or (4) collect and retain revenue such as lockage fees. While the Water Resources Reform and Development Act of 2014 authorized the pilot programs to explore the use of public private partnerships and debt financing, Corps officials told us that they cannot enter into agreements of this type without specific appropriations, which they have not yet received. Corps officials stated that they are currently developing a high level policy to provide general direction about the use of alternative financing but according to them, the lack of a clear revenue source may make it more difficult to execute alternative-financing strategies that include private partners for inland waterways infrastructure. In contrast, the President recently proposed establishment of a Federal Capital Revolving Fund, which could enable federal agencies to access full upfront funding for certain construction projects without leveraging private capital. According to the proposal, the revolving fund would transfer funding to agencies to finance large-dollar real-property capital projects designated in appropriations acts if the project receives an appropriation for the first of a maximum of 15 required annual repayments. If those conditions are met, the revolving fund would transfer funds to agencies to cover the full cost to acquire the capital asset—in the case of inland waterways, the full cost to construct the project. Purchasing agencies would repay the fund using annual appropriations— for inland waterways, this approach likely would mean that repayments could be made using appropriations from either the Trust Fund or general revenues. While Corps inland-waterways construction projects would not be eligible for funding under this proposal, this type of approach to alternative financing could potentially be used to enable the Corps to contract for inland waterways construction more efficiently. 
However, only projects included in the President's budget request would be eligible to receive this funding. At present, only one inland waterway project—Olmsted Locks and Dam—meets that requirement, and the Corps does not anticipate other authorized projects meeting the current benefit-cost ratio threshold for inclusion. Congressional action would be required to implement the proposed Federal Capital Revolving Fund, as well as to make inland-waterways construction projects eligible or to establish a separate fund that would include Corps infrastructure projects.

Conclusions

The inland waterways are a critical component of the nation's freight transportation system, and the Corps must manage the system within the context of competing priorities and limited resources. To effectively manage those resources, the Corps must accurately identify, assess, and communicate its priorities for operations, maintenance, and construction funding. The Corps cannot quantify deferred maintenance for inland waterways because it lacks a definition and measure (or measures) of deferred maintenance that reflects priorities and how deferral will affect system reliability. As such, the Corps is unable to clearly communicate its funding needs related to operating and maintaining the inland waterways. As with many federal programs, the Corps manages inland-waterways construction and major-rehabilitation projects within some fundamental constraints, including available Trust Fund revenue, which is less than the amount that would be needed to fully fund the estimated costs of any of the four ongoing new construction projects. Accordingly, Congress and the President have instead incrementally funded multiple construction projects at a time. However, this incremental-funding approach can lead to construction delays and increasing costs. As a result, other priority projects cannot be started, construction backlogs grow, and delays and closures continue to affect vessels at locks and dams that deteriorate further while waiting for replacement or rehabilitation. The Corps' capital investment strategy identifies an approach to funding priority projects given estimated Trust Fund revenue, but given the constrained fiscal environment and the unpredictable nature of the annual appropriations process, cost increases and schedule delays are likely to continue. Should Congress decide that additional funding is warranted to reduce this inefficiency, our report includes several options stakeholders have identified for doing so, such as increasing the federal share of construction costs for these projects. In the absence of increased funding, however, stakeholders we spoke to identified actions the Corps could take in coordination with Congress to increase the efficiency of contracting for inland waterways projects. The Corps could explore changes—such as sequencing project construction or legislative changes to enable more upfront funding prior to starting construction, among other options discussed in this report—that would enable the Corps to contract for inland waterways construction in a more efficient way. However, all of the options we discuss have important policy trade-offs and other challenges that the Corps and Congress would need to carefully consider.

Recommendations for Executive Action

We are making the following two recommendations to the Corps:
The Chief of Engineers and Commanding General of the U.S. Army Corps of Engineers should define and measure deferred maintenance for inland waterways in a way that enables the Corps to clearly communicate estimated costs for maintenance needs. (Recommendation 1)

The Chief of Engineers and Commanding General of the U.S. Army Corps of Engineers should pursue ways to increase the Corps' ability to use available funding for inland waterways construction more efficiently and, should changes to the Corps' authority be necessary, develop a legislative proposal to request such authority. (Recommendation 2)

Agency Comments

We provided a draft of this report to the Secretaries of Defense, Transportation, and Homeland Security and the Director of the Office of Management and Budget for review and comment. The Department of Defense provided written comments that are reprinted in appendix V; the department concurred with our recommendations. The Department of Homeland Security and Office of Management and Budget provided technical comments, which we incorporated as appropriate. The Department of Transportation had no comments on the draft report. We are sending copies of this report to appropriate congressional committees; the Secretaries of Defense, Transportation, and Homeland Security; and the Director of the Office of Management and Budget. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or VonAhA@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI.

Appendix I: Inland and Intracoastal Fuel-Taxed Waterways of the United States

1. Alabama-Coosa Rivers: From junction with the Tombigbee River at river mile (hereinafter referred to as RM) 0 to junction with Coosa River at RM 314.
2. Allegheny River: From confluence with the Monongahela River to form the Ohio River at RM 0 to the head of the existing project at East Brady, Pennsylvania, RM 72.
3. Apalachicola-Chattahoochee and Flint Rivers (ACF): Apalachicola River from mouth at Apalachicola Bay (intersection with the Gulf Intracoastal Waterway) RM 0 to junction with Chattahoochee and Flint Rivers at RM 107.8. Chattahoochee River from junction with Apalachicola and Flint Rivers at RM 0 to Columbus, Georgia, at RM 155, and Flint River from junction with Apalachicola and Chattahoochee Rivers at RM 0 to Bainbridge, Georgia, at RM 28.
4. Arkansas River (McClellan-Kerr Arkansas River Navigation System): From junction with Mississippi River at RM 0 to Port of Catoosa, Oklahoma, at RM 448.2.
5. Atchafalaya River: From RM 0 at its intersection with the Gulf Intracoastal Waterway at Morgan City, Louisiana, upstream to junction with Red River at RM 116.8.
6. Atlantic Intracoastal Waterway: Two inland waterway routes approximately paralleling the Atlantic coast between Norfolk, Virginia, and Miami, Florida, for 1,192 miles via both the Albemarle and Chesapeake Canal and Great Dismal Swamp Canal routes.
7. Black Warrior-Tombigbee-Mobile Rivers: Black Warrior River System from RM 2.9, Mobile River (at Chickasaw Creek) to confluence with Tombigbee River at RM 45. Tombigbee River (to Demopolis at RM 215.4) to port of Birmingham, RM's 374-411 and upstream to head of navigation on Mulberry Fork (RM 429.6), Locust Fork (RM 407.8), and Sipsey Fork (RM 430.4).
8. Columbia River (Columbia-Snake Rivers Inland Waterways): From The Dalles at RM 191.5 to Pasco, Washington (McNary Pool), at RM 330; Snake River from RM 0 at the mouth to RM 231.5 at Johnson Bar Landing, Idaho.
9. Cumberland River: Junction with Ohio River at RM 0 to head of navigation, upstream to Carthage, Tennessee, at RM 313.5.
10. Green and Barren Rivers: Green River from junction with the Ohio River at RM 0 to head of navigation at RM 149.1.
11. Gulf Intracoastal Waterway: From St. Mark's River, Florida, to Brownsville, Texas, 1,134.5 miles.
12. Illinois Waterway (Calumet-Sag Channel): From the junction of the Illinois River with the Mississippi River at RM 0 to Chicago Harbor at Lake Michigan, approximately RM 350.
13. Kanawha River: From junction with Ohio River at RM 0 to RM 90.6 at Deepwater, West Virginia.
14. Kaskaskia River: From junction with Mississippi River at RM 0 to RM 36.2 at Fayetteville, Illinois.
15. Kentucky River: From junction with Ohio River at RM 0 to confluence of Middle and North Forks at RM 258.6.
16. Lower Mississippi River: From Baton Rouge, Louisiana, RM 233.9 to Cairo, Illinois, RM 953.8.
17. Upper Mississippi River: From Cairo, Illinois, RM 953.8 to Minneapolis, Minnesota, RM 1,811.4.
18. Missouri River: From junction with Mississippi River at RM 0 to Sioux City, Iowa, at RM 734.8.
19. Monongahela River: From junction with Allegheny River to form the Ohio River at RM 0 to junction of the Tygart and West Fork Rivers, Fairmont, West Virginia, at RM 128.7.
20. Ohio River: From junction with the Mississippi River at RM 0 to junction of the Allegheny and Monongahela Rivers at Pittsburgh, Pennsylvania, at RM 981.
21. Ouachita-Black Rivers: From the mouth of the Black River at its junction with the Red River at RM 0 to RM 351 at Camden, Arkansas.
22. Pearl River: From junction of West Pearl River with the Rigolets at RM 0 to Bogalusa, Louisiana, RM 58.
23. Red River: From RM 0 to the mouth of Cypress Bayou at RM 236.
24. Tennessee River: From junction with Ohio River at RM 0 to confluence with the Holston and French Broad Rivers at RM 652.
25. White River: From RM 9.8 to RM 255 at Newport, Arkansas.
26. Willamette River: From RM 21 upstream of Portland, Oregon, to Harrisburg, Oregon, at RM 194.
27. Tennessee-Tombigbee Waterway: From its confluence with the Tennessee River to the Warrior River at Demopolis, Alabama.

Appendix II: Inland Waterways Stakeholders GAO Interviewed

American Association of State Highway and Transportation Officials
American Society of Civil Engineers
Big River Coalition (New Orleans)
Gulf Intracoastal Canal Association (New Orleans)
Illinois Corn Growers Association (Rock Island)
National Grain and Feed Association
Pacific Northwest Waterways Association (Walla Walla)
River Industry Action Committee (Rock Island)
Warrior-Tombigbee Waterway Association (Mobile)
Waterways Association of Pittsburgh (Pittsburgh)
Waterways Council, Inc.
Archer Daniels Midland Company (Rock Island)
Campbell Transportation Company, Inc. (Pittsburgh)
Canal Barge Company, Inc. (New Orleans)
Channel Shipyard Companies (New Orleans)
Cooper Marine & Timberlands Corp (Mobile)
J. Craig Stepan, formerly of U.S. Steel (Mobile)
Parker Towing Company (Mobile)
Shaver Transportation (Walla Walla)
Tidewater Barge Lines (Walla Walla)
Turn Services (New Orleans)
Arkansas Waterways Commission (Little Rock)
Little Rock Port Authority (Little Rock)
The Port of New Orleans (New Orleans)
The Port of Pittsburgh Commission (Pittsburgh)
Washington Grain Commission (Walla Walla)
Alabama Scenic River Trail (Mobile)
Allegheny River Development Corporation (Pittsburgh)
Boat Owners Association of the United States (BoatUS)
Little Rock Yacht Club (Little Rock)
Upper Monongahela River Association (Pittsburgh)
Allegheny County Sanitary Authority (Pittsburgh)
Clarksville Light & Water Company (Little Rock)
Southwestern Power Resources Association (Little Rock)
C. James Kruse, Texas A&M University
Chris Hendrickson, Ph.D., Carnegie Mellon University
Craig Philip, Ph.D., Vanderbilt University
Dennis Lambert, COWI Marine North America
Edward Dickey, Ph.D., Dawson & Associates
Gary Loew, Dawson & Associates
Jill Jamieson, Jones Lang LaSalle
Leonard Shabman, Ph.D., Resources for the Future
Paul Bingham, Economic Development Research Group, Inc.
B. Starr McMullen, Ph.D., Oregon State University
Stephen Fitzroy, Ph.D., Economic Development Research Group, Inc.

Appendix III: Technical Appendix for GAO's Funding Simulation for Inland-Waterways Construction Projects

To illustrate the effects associated with the current-funding approach, which was consistently discussed as a challenge in interviews with agency officials and stakeholders, we developed a funding simulation for hypothetical projects using assumptions that were anchored in findings from a 2008 Corps study on factors contributing to cost increases for inland-waterways construction projects. This funding simulation was intended to demonstrate the effects of the pattern and timing of funding on total project costs and construction schedules. To inform our assumptions, we analyzed the results of the Corps study, which examined three inland-waterways construction projects and identified the many factors that contributed to cost increases and schedule delays for each project. One of the factors the report identified that led to higher funding requirements (that is, cost overruns) was inefficient contracting driven by the amount and timing of funding provided to each project. We developed five hypothetical scenarios that represent different funding approaches for a set of four identical construction projects (including a control scenario in which full upfront funding for all projects is available) based on the following information:

• each project requires $500 million in funding;
• each project takes 5 years to construct if it is fully funded with $500 million up front;
• absent full upfront funding, each project was structured to expect funding of $100 million per year;
• once started, funding is not interrupted over the period of our simulation;
• the total amount of available funding for these projects is $200 million per year; and
• the number of years the projects provide benefits—that is, the number of years a facility has been constructed and is available for use by vessels—varies within the period of time selected for the simulation (2020 through 2034).

To illustrate the effects of the different funding approaches on total project costs and time frames, we made assumptions about the effect of various funding structures on total funding requirements.
These assumptions were informed by our review of the findings of the Corps' study related to the effects of incremental funding and discussions of these issues with Corps officials. These assumptions include the following:

• Remaining required project funding was assumed to increase by 2 percent each year due to inefficient contracting that results from less than full upfront funding—that is, if the full $500 million of estimated project funding is not provided in year 1.
• Remaining required project funding was also assumed to increase by 0.5 percent each year if projects received less funding than is expected in a given year (less than $100 million) due to exacerbated project-contracting inefficiencies.
• An increase of 2 percent per year of remaining required project funding was applied if the project's start was delayed beyond its intended starting year, to account for inflation.

We applied increases to funding requirements where appropriate under the five different funding approaches:

• Approach A: Fund One Project at a Time—Funding only one project at a time with all available funding ($200 million). Once the first project has been fully funded, all available funding is provided to the second project, and so on.
• Approach B: Fund Multiple Projects at Different Amounts—Funding one project at a time at the expected level—that is, at $100 million each year until it is finished—then dividing remaining available funding equally among the remaining three projects. After the first project is complete, the second project receives $100 million each year until completion and the remaining funding is divided evenly, and so on.
• Approach C: Fund Two Projects at a Time—Available funding is divided between two projects; two projects receive funding at the expected level ($100 million), and the start of funding for the remaining projects is delayed until the first two are completed.
• Approach D: Delay Construction to Fully Fund One Project at a Time—Full upfront funding for one project at a time: allocation of funds is delayed until the entire remaining funding required ($500 million plus increases due to inflation) is available.
• Approach E: Fund Multiple Projects Equally—Equally funding all four projects at once: since the overall budget is $200 million, each project is funded at $50 million per year.

We found that the timing and amount of incremental funding resulted in varying degrees of cost overruns (see fig. 13). In addition, the different funding approaches led to varying years of benefits—as measured by the Corps as the number of years a facility has been constructed and available for use by vessels—counted over the 15-year span of our simulation. This variation is shown in figure 13, but these projects would provide many years of benefits beyond this timeframe. For example, we found that—compared to other approaches—an incremental funding approach that concentrates all available funding on one of the four projects at a time, as in Approach A, can reduce inefficiency.

To validate our findings, we solicited feedback from Corps officials from the Pittsburgh District, Pennsylvania, and Rock Island District, Illinois, based on their past and current experience with inland-waterways construction projects; from the Corps' Cost Estimating Center of Expertise in Walla Walla, Washington; and from representatives of the Waterways Council, Inc., to understand the perspectives of industry stakeholders. They all generally agreed that our assumptions, approaches, and results were reasonable.
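For readers who want to trace the mechanics, the sketch below encodes two of the five approaches under the escalation assumptions above. It is an independent simplification for illustration (for example, budget allocated to an already finished project is simply left unspent), not the model that produced figure 13.

```python
# Simplified comparison of Approaches A and E using the escalation
# assumptions above. This sketch is illustrative only.

EXPECTED, BUDGET, COST = 100e6, 200e6, 500e6
INEFFICIENCY, UNDERFUNDING, INFLATION = 0.02, 0.005, 0.02

def run(allocate, years=20, n=4):
    remaining = [COST] * n          # funding still required per project
    finish = [None] * n             # completion year per project
    total_spent = 0.0
    for year in range(1, years + 1):
        for i, amount in enumerate(allocate(remaining)):
            if finish[i] is not None:
                continue
            if amount <= 0:
                remaining[i] *= 1 + INFLATION      # start delayed a year
                continue
            paid = min(amount, remaining[i])
            total_spent += paid
            remaining[i] -= paid
            if remaining[i] <= 0:
                finish[i] = year
            else:
                remaining[i] *= 1 + INEFFICIENCY   # not fully funded
                if paid < EXPECTED:
                    remaining[i] *= 1 + UNDERFUNDING
    return total_spent, finish

def approach_a(remaining):
    """Approach A: the full $200 million goes to one project at a time."""
    alloc = [0.0] * len(remaining)
    for i, r in enumerate(remaining):
        if r > 0:
            alloc[i] = BUDGET
            break
    return alloc

def approach_e(remaining):
    """Approach E: the budget is split equally, $50 million per project."""
    return [BUDGET / len(remaining)] * len(remaining)

for name, fn in (("A", approach_a), ("E", approach_e)):
    spent, finish = run(fn)
    print(f"Approach {name}: total ${spent / 1e9:.2f}B, finish years {finish}")
```

Under these simplified rules, Approach A completes the four projects in years 3, 6, 9, and 13 at a slightly lower total cost, while Approach E completes all four only in year 12, illustrating why concentrated funding yields earlier benefits.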
Appendix IV: Objectives, Scope, and Methodology

In this report, we (1) assess how the Corps allocates funds for operations and maintenance projects for the inland waterways system; (2) describe how the Corps prioritizes and funds construction projects, and assess the effect of the current-funding approach on projects' costs and schedules; and (3) present stakeholder opinions on proposed options to alter the funding and management of inland waterways and any associated limitations or trade-offs. The scope of our review includes Corps activities related to managing commercial navigation—including operations, maintenance, and construction—on the 27 inland waterways subject to the inland waterways diesel fuel tax. The fuel-taxed inland waterways system is made up of the navigable waterways of the Mississippi River and its tributaries, the Ohio River basin, the Gulf and Atlantic Intracoastal Waterways, and the Columbia-Snake Rivers, among others (see app. I for a list of fuel-taxed inland waterways). Commercial navigation activities are those that facilitate the movement of traffic along the waterways for commercial purposes, such as the transportation of goods for sale.

For contextual information on operations, maintenance, and construction spending, we analyzed Corps financial data on obligations for operations and maintenance for inland-waterways navigation projects for fiscal years 2006 through 2017 (the only years for which data were available) and allocations for construction and major rehabilitation of locks and dams for fiscal years 1997 through 2018 from the Corps of Engineers Financial Management System. To determine the reliability of these data for the purposes of this report, we reviewed the data to identify obvious errors and missing data and interviewed appropriate Corps officials about related internal controls and procedures and the limitations of the data. We found these data to be sufficiently reliable for the purpose of providing contextual information about funding for inland waterways operations and maintenance and construction over time.

With regard to all of our reporting objectives, we interviewed a range of Corps officials at the headquarters, division, and district levels, as well as national and regional stakeholders. We interviewed district officials from a non-generalizable sample of 6 of the 24 Corps districts that manage fuel-taxed waterways within their district boundaries; we selected the districts to include a variety of geographic regions, waterway characteristics, primary commodities shipped, and history of construction projects funded through the Trust Fund. Based on these criteria, we selected the Corps districts in Little Rock, Arkansas; Mobile, Alabama; New Orleans, Louisiana; Pittsburgh, Pennsylvania; Rock Island, Illinois; and Walla Walla, Washington. In addition, we interviewed officials from the Corps' Northwestern Division office, which oversees the Walla Walla District, to understand the division-level role in coordinating districts' inland-waterways infrastructure projects. We also conducted a total of 42 semi-structured interviews with waterways stakeholders representing 43 different regional and national entities—including commercial, recreational, and other waterway users—and 12 researchers (academics, economists, and engineers), for a total of 55 stakeholders. National stakeholders were identified by reviewing related literature and our prior reports and recommendations from the Transportation Research Board and the Waterways Council, Inc.
(an industry organization representing a range of waterway users including shippers, ports, energy providers, waterways operators, and other advocacy groups). Regional stakeholders in the six selected districts were identified through recommendations from agencies and national waterways stakeholder organizations to represent a mix of commercial users (such as barge companies and shippers with commercial interests in the U.S. inland waterways system); recreational users; and industrial water users (such as municipal water authorities and hydropower entities). From those stakeholders identified, we selected entities to interview to achieve diversity of waterway users' perspectives and conducted interviews with both individual entities as well as associations representing a variety of users and companies. In addition to waterways users, we also interviewed stakeholders who have conducted research regarding the management of and allocation of funding for fuel-taxed waterways, selected based on their contributions to the relevant literature on options for funding and managing inland waterways, including academics, economists, and engineers who were knowledgeable about a range of topics including commodities transportation (agricultural, energy products, and other materials), engineering, and water resources. See appendix II for a list of entities represented among the stakeholders we interviewed.

We asked agency officials and stakeholders open-ended questions and did not conduct a survey in which a response was provided irrespective of whether a certain issue was relevant to the interviewee, so not every topic was brought up or discussed by every interviewee. We analyzed the responses to identify common themes and the range of opinions that arose in interviews, which we have reported on. To identify these themes and summarize the opinions of agency officials and stakeholders, potential themes were identified via review of a sample of interviews. Two analysts then conducted a content analysis to identify the themes discussed in each interview and categorize the opinions of the interviewees. For each interview, one analyst independently reviewed the record of the interview, and the other analyst later verified that coding. If there was disagreement, the analysts discussed their assessments and came to a final determination on the categorization. Because we selected a non-generalizable sample of stakeholders, their responses should not be used to make inferences about a population. To characterize stakeholders' views throughout this report, we defined modifiers (e.g., "some") to quantify stakeholders as follows:

• "some" stakeholders represents stakeholders in 3 to 14 of the 42 interviews; and
• "many" stakeholders represents stakeholders in 15 or more of the 42 interviews.

To examine how the Corps allocates funds for operations and maintenance projects for the inland waterways system, we examined the President's budget request for civil works and appropriations for fiscal years 1997 through 2018 as well as the Corps' budget request development guidance to understand how the Corps develops its budget request and prioritizes operations and maintenance projects. We conducted site visits to Mobile, Alabama; New Orleans, Louisiana; and Pittsburgh, Pennsylvania, to interview Corps officials and various regional stakeholder groups in person, and to observe the condition of waterway infrastructure.
We also interviewed officials from the Office of the Assistant Secretary of the Army for Civil Works (ASA-CW), the Office of Management and Budget (OMB), the Department of Transportation's Maritime Administration, and the Department of Homeland Security's U.S. Coast Guard to understand how the Corps coordinates with other agencies to fulfill its inland-waterways navigation mission. To assess the Corps' efforts related to deferred maintenance, we interviewed Corps officials about how the Corps measures and defines deferred maintenance and compared these practices with federal internal-control standards related to control activities and quality information. To describe how the Corps prioritizes and funds inland-waterways construction projects and to examine the effect of the current funding approach on projects' costs and schedules, we reviewed relevant statutes, agency policies and guidance, the Corps' capital-investment strategy documents prepared in conjunction with the Inland Waterways Users Board, as well as the Corps' Civil Works budget justification documents in support of the President's budget requests, congressional appropriations, and accompanying conference reports. We also reviewed relevant Corps documents, such as reports on ongoing construction projects and studies on construction cost increases; prior GAO reports; OMB capital funding guidance; and other academic studies to gather information on capital project funding approaches, including for inland waterways projects. We analyzed data from the Corps of Engineers Financial Management System to identify sources of funding for inland-waterways construction projects from fiscal years 1996 through 2018. As discussed above, we found these data sufficiently reliable for the purposes of providing contextual information about the Corps' funding sources. In addition to interviewing Corps officials and stakeholders, as described above, we also interviewed officials from the Office of the ASA-CW and OMB for their views regarding the prioritization and funding processes for inland-waterways infrastructure projects, and the roles their organizations play in those processes. We compared the established method of funding inland-waterways construction projects with federal internal-control standards, OMB guidance, and prior GAO work related to funding capital projects. To illustrate the effects of the current funding approach on costs and schedules for inland-waterways construction projects, we developed a simulation of the effects of various funding approaches on the total funding requirements for a set of hypothetical construction projects. The simulation incorporates assumptions regarding the amount of total funding a project would require (including any cost overruns) due to the pattern and timing of funding made available. Our assumptions were anchored in findings from a 2008 Corps study on factors contributing to cost increases for three inland-waterways construction projects, and Corps officials and other industry stakeholders generally agreed that our assumptions and results were reasonable. Additional information on our methodology for developing this simulation and the full results are included in appendix III.
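Appendix III describes the actual simulation and its results. Purely to illustrate the mechanism described above, the minimal sketch below models a single hypothetical project under full versus incremental funding. The parameters (a $100 million base cost, a $20 million annual funding need, and a 5 percent annual cost premium on unfinished work) are illustrative assumptions chosen for this sketch, not figures from the Corps study or from the simulation described in appendix III.

```python
# Illustrative sketch only -- a toy model of incremental vs. full funding,
# not the Corps' or GAO's actual simulation. All parameters are hypothetical.

def simulate(base_cost, annual_need, annual_funding, premium=0.05):
    """Return (total_spent, years) for one hypothetical project."""
    remaining = base_cost
    total_spent = 0.0
    years = 0
    while remaining > 0:
        years += 1
        spend = min(annual_funding, annual_need, remaining)
        remaining -= spend
        total_spent += spend
        # Assumed mechanism: when funding falls short of the annual need,
        # the unfinished balance must be re-contracted in separable pieces,
        # which adds a cost premium. (The premium must be small enough
        # relative to annual funding that the project can still finish.)
        if remaining > 0 and annual_funding < annual_need:
            remaining *= 1 + premium
    return total_spent, years

# A $100M project that needs $20M per year for 5 years.
full_cost, full_years = simulate(100.0, annual_need=20.0, annual_funding=20.0)
incr_cost, incr_years = simulate(100.0, annual_need=20.0, annual_funding=10.0)
print(f"Fully funded:         ${full_cost:.1f}M over {full_years} years")
print(f"Incrementally funded: ${incr_cost:.1f}M over {incr_years} years")
```

Under these assumptions, the fully funded project finishes in 5 years at its $100 million base cost, while the half-funded project takes 14 years and costs roughly $133 million—directionally consistent with, though not a reproduction of, the cost increases and schedule delays described in this report.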
Finally, to identify proposed options to alter the funding and management of inland waterways, we conducted a literature search—including scholarly/peer-reviewed journals, government reports, congressional hearings' transcripts, and associations' and think tanks' publications—to identify relevant studies and proposals about inland waterways' financing in the United States, published between 2007 and 2017. Through our literature search, we reviewed the abstracts for 103 potentially relevant studies and identified 24 for further review. For each of these 24 studies, we reviewed the entire study and determined 13 studies were relevant. We then reviewed these 13 studies to identify the options most commonly discussed or proposed. For the purposes of this report, we have divided those options into broad categories: altering the cost sharing between the Trust Fund and federal appropriations; requiring other users and beneficiaries of the waterways to contribute to the Trust Fund; increasing or adding fees for commercial users; expanding opportunities for local sponsors to contribute to funding; and pursuing alternative-financing arrangements. In addition, we reviewed proposals by recent administrations, including the fiscal year 2018 President's budget request, and interviewed Corps officials and other entities, including the Transportation Research Board and district and agency stakeholders selected as described above, to ensure we had identified the most relevant options. During interviews with stakeholders (as discussed above), we asked about their general views on the potential benefits, limitations, and trade-offs of those options. See appendix II for a list of the stakeholders we interviewed. We conducted this performance audit from June 2017 through November 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix V: Comments from the Department of Defense

Appendix VI: GAO Contact and Staff Acknowledgments

GAO Contact

Andrew Von Ah, (202) 512-2834 or Vonaha@gao.gov.

Staff Acknowledgments

In addition to the contact named above, the following individuals made important contributions to this report: Susan Zimmerman, Assistant Director; Katie Hamer, Analyst-In-Charge; Amy Abramowitz; Faisal Amin; Krister Friday; Carol Henn; Hannah Laufe; Sara Ann Moessbauer; Josh Ormond; Cheryl Peterson; Amy Rosewarne; Alexandra Rouse; Lisa Shibata; and Pamela Snedden.
Why GAO Did This Study

The Corps is primarily responsible for operating and maintaining the nation's inland waterways, including maintaining locks and dams as well as rehabilitating, modernizing, or constructing new infrastructure as needed. Persistent schedule delays and cost overruns for inland-waterways construction projects have prompted some in Congress to explore funding and management alternatives. GAO was asked to review options to change the management of inland waterways. Among other things, this report assesses how the Corps allocates funds for operations and maintenance for the inland waterways, describes how the Corps funds construction projects, and assesses the effect of the current funding approach on projects' costs and schedules. GAO reviewed Corps documents and data; interviewed officials from Corps headquarters, six districts, and representatives of regional and national stakeholder groups—including commercial and recreational interests as well as contributors to relevant literature—selected to achieve a variety of viewpoints; and developed a simulation of the effect of various funding approaches on the total funding requirements and timelines for a set of hypothetical construction projects.

What GAO Found

The U.S. Army Corps of Engineers (Corps) allocates its appropriated funding for operations and maintenance projects for the inland waterways based on risk and economic benefits. However, the Corps does not know how much deferred maintenance exists for inland waterways because there is no agreed-upon definition for deferred maintenance. Corps and ASA-CW officials identified several challenges related to developing a useful definition with which to measure deferred maintenance. For example, a single measure may not be useful to gauge the condition of the waterways because the effect of deferred maintenance projects on the reliability of the waterways will vary. However, without a measure or measures of deferred maintenance for inland waterways that (1) the Corps finds useful, (2) reflects its priorities, and (3) accurately conveys a consistent and well-defined measure of deferred maintenance, the Corps is limited in its ability to manage its maintenance efforts and accurately communicate its estimated maintenance costs to OMB and the Congress. With regard to inland-waterways construction projects, the Corps prioritizes them based on expected costs and benefits. The Corps assesses the net economic benefits of inland-waterways construction projects' alternatives by comparing estimated direct costs (e.g., construction costs to build a new lock chamber) to estimated reductions in waterway transportation costs (e.g., reduced travel costs related to a reduction in the time it might take for a barge to pass through a larger lock chamber). According to Corps officials and stakeholders, the current incremental-funding approach for prioritized projects, among other things, has resulted in schedule delays and cost increases. Although full upfront funding for capital projects is an important tool for effective management, inland-waterways construction projects have been funded incrementally, meaning the Corps requests—and Congress appropriates—annual funding that covers a portion of a project's estimated costs. Corps reports and academic studies have found that this approach results in increased project costs because the Corps must contract for construction in separable pieces. This approach is less efficient than contracting for the entire project at once.
For example, Corps officials currently expect that the Kentucky Lock Addition project will cost at least $229 million more than the originally estimated cost as a direct result of this contracting approach. Without some change in the way inland-waterways construction projects are funded, either to provide full funding or to reduce the effects of incremental funding by concentrating funding on fewer projects at one time, current cost increases and schedule delays resulting from inefficient contracting are likely to continue.

What GAO Recommends

GAO is making two recommendations: that the Corps define and measure deferred maintenance for inland waterways and that it pursue changes to increase its ability to more efficiently use available funding for construction. The Department of Defense concurred with GAO's recommendations.
Background

CHIP-IN Act

According to VA officials and Omaha donor group representatives, two main factors coalesced to become the impetus for the CHIP-IN Act. One factor was an Omaha donor group's interest in constructing an ambulatory care center that could help address the needs of veterans in the area, given uncertainty about when or whether VA would be able to build a planned replacement medical center. In 2011, VA allocated $56 million for the design of the replacement medical center in Omaha, which had a total estimated cost of $560 million. However, VA officials told us that given the agency's backlog of construction projects, the replacement medical center was not among its near-term projects. In the meantime, according to VA officials and the Omaha donor group, they discussed a change in the scope of the project—from the original plan of a replacement medical center to a smaller-scope project for a new ambulatory care center—that could potentially be constructed using the existing appropriation of $56 million plus a donation from the Omaha donor group. Another factor was the Congress's and VA's broader interest in testing innovative approaches to meeting VA's infrastructure needs. According to VA officials, the agency was interested in constructing medical facilities in a more expeditious manner and developing legislation that allowed private money to help address VA's needs. The CHIP-IN Act authorized a total of five pilot projects but did not name any specific project locations. Subsequently, the Omaha donor group applied to participate in the pilot program—with the construction of an ambulatory care center—and VA executed a donation agreement in April 2017. VA may accept up to four more real property donations under the pilot program, which is authorized through 2021. The CHIP-IN Act places certain requirements on donations under the pilot program. VA may accept CHIP-IN donations only if the property: (1) has already received appropriations for a VA facility project, or (2) has been identified as a need as part of VA's long-range capital planning process and the location is included on the Strategic Capital Investment Planning process priority list provided in VA's most recent budget submission to Congress. The CHIP-IN Act also requires that a formal agreement between VA and the non-federal entity provide that the entity conduct necessary environmental and historic preservation due diligence, obtain permits, and use construction standards required of VA, though the VA Secretary may permit exceptions.

Omaha Project

VA entered into an agreement with the Omaha donor group for the design and construction of an ambulatory care center in April 2017—4 months after enactment of the CHIP-IN Act. According to this agreement, which establishes the terms of the donation, the Omaha donor group will complete the design and construction of the facility and consult with VA. The facility will provide approximately 158,000 gross square feet of outpatient clinical functions, including primary care, an eye clinic, general purpose radiology and ambulatory surgery, specialty care, and mental health care. According to VA officials, planning for the facility began in April 2017, after the donation agreement was executed, and the project broke ground in April 2018. This donation agreement includes the mutually agreed-upon design and construction standards, which incorporate both VA's standards and private sector building standards.
The donation agreement also sets the terms of VA's review of the design and construction documents and establishes escrow operations for the holding and disbursement of federal funds. Upon the Omaha donor group's completion of the facility (scheduled for summer 2020) and VA's acceptance, the Omaha donor group will turn the facility over to VA. The total estimated project cost is approximately $86 million. VA is contributing the $56 million that had already been appropriated for the design of the replacement medical facility. The Omaha donor group will donate the remaining approximately $30 million in private sector donations needed to build the facility.

Pilot Program

As shown in figure 2 and described below, VA officials told us that several offices are involved in various aspects of the CHIP-IN pilot—such as executing the Omaha project, seeking additional partnerships, and establishing the overall pilot program effort. The VA Office of Construction and Facilities Management (CFM) includes its Office of Real Property (ORP) and Office of Operations. ORP has taken a lead role in establishing the pilot program, while CFM Operations has led the execution of the Omaha project. Other VA offices that have been involved at different stages include the Office of General Counsel and the Secretary's Center for Strategic Partnerships. Within the Veterans Health Administration (VHA), the local medical-center leadership was involved with developing the Omaha project, and the Office of Capital Asset Management, Engineering, and Support (Capital Asset Management Office) has contributed to efforts to identify additional projects. Some of these offices are involved with a steering committee created to implement the CHIP-IN Act (CHIP-IN steering committee). This steering committee met for the first time in September 2018.

VA Has Not Yet Established a Framework for Effective Pilot Design for the CHIP-IN Pilot Program

In 2016, we identified five leading practices for designing a well-developed and documented pilot program: establishing well-defined objectives, articulating an assessment methodology, developing an evaluation plan, assessing scalability, and ensuring stakeholder communication. (See fig. 3.) These practices enhance the quality, credibility, and usefulness of pilot program evaluations and help ensure that time and resources are used effectively. While each of the five practices serves a purpose on its own, taken together, they form a framework for effective pilot design. VA officials have worked to communicate with relevant stakeholders, but have not yet established objectives, developed an assessment methodology and evaluation plan, or documented how they will make decisions about scalability of the pilot program.

VA Has Not Established Clear Objectives

In 2016, we reported that clear, measurable objectives can help ensure that appropriate evaluation data are collected from the outset of a pilot program. Measurable objectives should be defined in qualitative or quantitative terms, so that performance toward achieving the objectives can be assessed, according to federal standards for internal control. For example, broad pilot objectives should be translated into specific researchable questions that articulate what will be assessed. Establishing well-defined objectives is critical to effectively implementing the other leading practices for a pilot program's design. Objectives are needed to develop an assessment methodology to help determine the data and information that will be collected.
Objectives also inform the evaluation plan because performance of the pilot should be evaluated against these objectives. In addition, objectives are needed to assess the scalability of the pilot, to help inform decisions on whether and how to implement a new approach in a broader context (i.e., whether the approach could be replicable in other settings). Relevant VA stakeholders have not yet collectively agreed upon and documented overall objectives for the CHIP-IN pilot program, but the stakeholders said they are planning to do so. However, at the time of our review, each of the VA offices we interviewed presented various ideas of what the objectives for the pilot should be, reflecting their varied missions and roles in the CHIP-IN pilot. For example:

A senior VHA official said the objectives should include (1) determining whether the CHIP-IN donation partnership approach is an effective use of VA resources and (2) defining general principles for the pilot, including a repeatable process for future CHIP-IN projects.

A senior VA official who has been closely involved with the pilot said one objective should be determining how VA can partner with the private sector for future construction projects, whether through donation partnerships or other means.

Officials from ORP, who have taken a lead role in establishing the pilot, told us their objectives include identifying the four additional projects authorized by the CHIP-IN Act, developing a process to undertake potential projects, and determining whether a recommendation should be made that Congress extend VA's CHIP-IN authority beyond the 5-year pilot. ORP officials said they have written some of these objectives in an early draft of plans for the CHIP-IN steering committee, but they have also discussed other objectives that are not yet documented.

While the various VA offices involved may have somewhat different interests in the pilot program, developing a set of clear, measurable objectives is an important part of a good pilot design. For example, several VA officials who are involved in the pilot told us that it would be useful for relevant internal stakeholders to collectively agree upon and document overall objectives. ORP officials told us that the newly formed CHIP-IN steering committee will discuss and formalize objectives for the pilot. However, at the time of our review, a draft of these objectives had not been developed and a timeline for developing objectives was not yet established. A discussion of objectives was planned for the steering committee's first meeting in September but had been rescheduled for the next meeting in October 2018. VA officials told us that they did not immediately move to establish a framework for the pilot program—which would include objectives for the pilot—for various reasons. Some officials said that VA and the Omaha donor group entered into formal discussions shortly after the CHIP-IN Act was enacted, and that their focus at the time was on negotiating and then executing a donation agreement for that particular project. As such, formal efforts to establish the framework for the overall pilot effort were in initial stages at the time of our review. ORP officials also said that the enactment of the CHIP-IN Act was not anticipated at the time CFM was planning and budgeting its resources for fiscal years 2017 and 2018, so work on the pilot had to be managed within available resources, largely as an additional duty for staff.
In addition, a senior VHA official said a meeting to agree upon the pilot program's objectives was needed but had not been held yet, noting that VA has competing priorities and vacancies at the senior executive level. ORP officials said they are now following project management principles in implementing the pilot. As part of this effort, they said that they intend to develop foundational documents for review by the CHIP-IN steering committee—such as a program plan containing objectives—but they have not done so yet. Without clearly defined and agreed-upon objectives, stakeholders within VA may have different understandings of the pilot's purpose and intended outcomes. As a result, the agency risks pursuing projects that may not contribute to what VA hopes to learn or gain from the pilot. While VA officials are planning to establish objectives as they formalize the CHIP-IN steering committee, at the time of our review these objectives had not been documented and no timeline had been established for when they would be. Without clear, measurable objectives, VA will be unable to implement other leading practices for pilot design, such as determining how to make decisions about scalability. Further, not defining objectives in the near future would ultimately affect VA's ability to evaluate the pilot and provide information to Congress about its results.

VA Has Not Developed and Documented an Assessment Methodology or Evaluation Plan

We have reported that developing a clearly articulated assessment methodology and a detailed evaluation plan are leading practices for pilot design. The assessment methodology and evaluation plan should be linked to the pilot's objectives so that evaluation results will show successes and challenges of the pilot, to help the agency draw conclusions about whether the pilot met its objectives. The assessment methodology and evaluation plan are also needed to determine scalability, because evaluation results will show whether and how the pilot can be expanded or incorporated into broader efforts. Given that several VA offices are involved in the pilot's implementation, it is important for relevant stakeholders to be involved with defining and agreeing upon the assessment methodology and evaluation plan. VA has not yet fully developed and documented either an assessment methodology or an evaluation plan for the pilot, but VA officials told us they plan to do so. For example, ORP officials said they intend to collect lessons learned and then evaluate the pilot at its end in 2021 by reviewing this information with relevant stakeholders. However, more specific details for this assessment methodology have not been defined in accordance with this leading practice. For example, we found that ORP has not yet determined which offices will contribute lessons learned, how frequently that information will be collected, or who will collect it. Similarly, details for an evaluation plan have not been defined, including who will participate in the evaluation and how information will be analyzed to evaluate the pilot's implementation and performance. Now that the CHIP-IN steering committee has met for the first time, this group intends to discuss assessment of the pilot at a future meeting, but it is not clear when that discussion will occur, what leading practices will be considered, and when plans will be defined and documented.
According to VA officials, an assessment methodology and evaluation plan have not been developed because, as discussed above, after the CHIP-IN Act was enacted, efforts were focused on negotiating the Omaha donation agreement and then executing that project. As such, formal efforts to establish the pilot through the CHIP-IN steering committee were in initial stages at the time of our review. Further, until VA has agreed-upon and documented objectives for the pilot program, it may be difficult to determine what information is needed for an assessment methodology and how the pilot will be evaluated. Unless VA establishes a clear assessment methodology that articulates responsibilities for contributing and documenting lessons learned, VA may miss opportunities to gather this information from the pilot. For example, while some stakeholders are documenting lessons learned relevant to their roles in the pilot, others are not. Specifically, ORP and CFM Operations are documenting lessons learned, but other VA offices and the Omaha donor group have not, though some told us they would be willing to share lessons learned if asked. Without an assessment methodology, there may also be confusion about who is responsible for documenting lessons learned. For example, a senior CFM official said that the Omaha donor group was compiling lessons learned from the pilot overall and would subsequently share those with VA. However, representatives from the donor group told us they have not been asked to share lessons learned with VA, but they would be willing to do so. When key individuals leave their positions—a situation that has occurred a number of times during implementation of the CHIP-IN pilot—their lessons learned may not be captured. For example, VA officials and donor group representatives told us that two VA officials who were involved in developing the pilot have since left the agency. In addition, stakeholders' memories of lessons learned may fade unless they record them. Waiting to develop an evaluation plan—which should include details about how lessons learned will be used to measure the pilot's performance—may ultimately affect VA's preparedness to evaluate the pilot and provide information to Congress about its results.

VA Has Not Documented Plans to Assess Scalability

The purpose of a pilot is generally to inform a decision on whether and how to implement a new approach in a broader context—or, in other words, whether the pilot can be scaled up or increased in size to a larger number of projects over the long term. Our prior work has found that it is important to determine how scalability will be assessed and the information needed to inform decisions about scalability. Scalability is connected to other leading practices for pilot design, as discussed above. For example, criteria to measure scalability should provide evidence that the pilot objectives have been met, and the evaluation's results should inform scalability by showing whether and how the pilot could be expanded or how well lessons learned from the pilot can be incorporated into broader efforts. VA officials have begun to implement this leading practice by considering the pilot as a means of testing the viability of the donation partnership approach; however, plans for assessing scalability have not been fully defined and documented. A senior VA official said scalability is seen as a way to determine if the donation approach or other types of private sector partnerships are a viable way to address VA's infrastructure needs.
Similarly, ORP officials told us they are first considering scalability in terms of whether the CHIP-IN donation approach is an effective or feasible way of delivering VA projects. These officials said scalability will be largely determined by whether all five authorized projects can be executed before authorization for the CHIP-IN pilot program sunsets. For example, if VA can find four additional projects and execute donation agreements before the pilot's authority expires, then potentially VA could seek congressional reauthorization to extend the program beyond the 5-year pilot. ORP officials are also considering scalability in terms of any changes to the program, such as incentives for donors, that could potentially increase its effectiveness. However, ORP officials explained that scalability may be limited because the types of projects that can be accomplished with the CHIP-IN donation approach may not be the projects that are most needed by VA. Along with other pilot design topics, the CHIP-IN steering committee intends to discuss scalability at a future meeting, but it is not clear when that discussion will occur. Thus, while VA officials have considered what scalability might look like, they have not fully determined and documented how to make decisions about whether the pilot is scalable. Since VA has not defined and documented the pilot's objectives and its evaluation plans, it may be more difficult to determine how to make decisions about scalability. Considering how the pilot's objectives and evaluation plans will inform decisions about scalability is critical to providing information about the pilot's results. For example, at the end of the pilot, VA and Congress will need clear information to make decisions about whether the CHIP-IN donation approach could be extended beyond a pilot program, if any changes could enhance the program's effectiveness, or if particular lessons learned could be applied to VA construction projects more broadly. Without clear information about scalability, VA may be limited in its ability to communicate quality information about the achievement of its objectives. Such communication is part of the federal standards for internal control.

VA Is Making Efforts to Improve Communication with Relevant Stakeholders

We have reported that appropriate two-way stakeholder communication and input should occur at all stages of the pilot, including design, implementation, data gathering, and assessment. To that end, it is critical that agencies identify who or what entities the relevant stakeholders are and communicate with them early and often. This process may include communication with external stakeholders and among internal stakeholders. Communicating quality information both externally and internally is also consistent with federal standards for internal control. VA has begun to implement this practice, with generally successful communication with the Omaha donor group. While VA has experienced some external and internal communication challenges about the pilot, officials have taken steps to help resolve some of these challenges.

External communication. VA officials and representatives from the Omaha donor group generally described excellent communication between their two parties. For example, donor group representatives told us that in-person meetings helped to establish a strong relationship that has been useful in negotiating the donation agreement and executing the project to date.
Further, VA officials and donor group representatives said that all relevant stakeholders—such as the donor group's construction manager, general contractor, and architect, as well as VA's engineer, project manager, and medical center director—were included in key meetings once the Omaha project began, and said that this practice has continued during the construction phase. Although the Omaha donor group reported overall effective relations and communications with VA, donor group representatives noted that additional public relations support from VA would have been helpful. For example, after the CHIP-IN project was initiated in Omaha, the donor group encountered a public relations challenge when news reports about unauthorized waiting lists at the Omaha medical center jeopardized some donors' willingness to contribute to the project. While donor group representatives said this challenge was addressed when the donor group hired a public relations firm, they also explained that it would be helpful for VA headquarters to provide more proactive public relations support to the local areas where future CHIP-IN projects are located. VA officials stated that they experienced some initial challenges communicating pilot requirements to external entities that are interested in CHIP-IN donation partnerships, but officials said that in response the agency has changed its outreach approach. As discussed below, the donation commitment aspect of the pilot can be a challenge. When interested entities contact VA to request information on the CHIP-IN pilot, VA officials told us they find the entities are often surprised by the donation commitment. For example, two entities that responded to VA's request for information (RFI) told us they were not clear about the donation requirement or the expected level of donation, or both. One respondent did not understand that the pilot required a donation and would not provide an opportunity for a financial return on investment. Another respondent indicated that when they asked VA for clarification about the expected project's scope, personnel from a headquarters office and the local VA medical center could not fully answer their questions. VA officials acknowledged these challenges and said they have changed their outreach efforts to focus on certain potential CHIP-IN locations, rather than RFIs aimed at a broader audience. Further, VA officials said that when speaking with potential donors going forward, they plan to involve a small group of officials who are knowledgeable about the pilot and its donation approach.

Internal communication. While VA initially experienced some challenges in ensuring that all relevant internal stakeholders have been included in the pilot's implementation, according to officials, the agency has taken recent steps to address this concern and involve appropriate internal offices. For example, officials from the Capital Asset Management Office said they could have assisted ORP in narrowing the list of potential projects in the RFIs but were not consulted. Later, after revising the marketing approach, ORP reached out to the Capital Asset Management Office and other relevant offices for help in determining priority locations for additional CHIP-IN projects, according to an ORP official. Officials from the Capital Asset Management Office told us that with improved engagement they were able to participate more actively in discussions about the pilot. In addition, initial plans for the CHIP-IN steering committee did not include VHA representation.
However, in summer 2018 ORP expanded the planned steering committee to include VHA representatives, a plan that some other VA offices told us is needed to ensure that the pilot addresses the agency's healthcare needs and that VHA offices are informed about pilot efforts.

CHIP-IN Pilot Suggests That Donation Partnerships Can Improve Project Implementation, but Challenges Include Identifying Donors and Establishing Responsibilities

VA and Omaha Donor Group Agree That the CHIP-IN Donation Approach and Private Sector Practices Have Improved the Omaha Project's Implementation

Based on the experience with the Omaha project, the CHIP-IN donation approach can result in potential cost and time savings—through the leveraging of private-sector funding, contracting, and construction practices—according to VA officials and the Omaha donor group. Regarding cost savings, one VA official stated that using donations makes VA's appropriated funds available to cover other costs. In addition, based on the experience with the Omaha project, other VA officials told us that a CHIP-IN project can potentially be completed for a lower cost because of practices resulting from private sector leadership. Specifically, VA estimated that the Omaha ambulatory care center would cost about $120 million for VA to build outside of a donation partnership—as a standard federal construction project. Under the CHIP-IN pilot, however, the total estimated cost of the Omaha facility is $86 million—achieving a potential $34 million cost savings. Regarding time savings, CHIP-IN projects can potentially be completed at a faster pace because of the use of certain private sector practices and because projects can be addressed earlier than they otherwise would be, according to VA officials. The use of private-sector building practices can result in cost and time savings in a number of ways, according to VA officials and the Omaha donor group, as follows:

The use of private-sector building standards contributed to cost savings for the Omaha project, according to VA officials and donor group representatives. VA and the donor group negotiated a combination of industry and VA building standards. A CFM official told us that using this approach and working with the private sector donor group encouraged the design team to think creatively about the risk assessment process and about how to meet the intent of VA's physical security standards, but at a lower cost than if they were required to build a facility using all of VA's building standards as written. For example, when assessing the safety and physical-security risk, the donor group and VA identified a location where two sides of the facility will not have direct exposure to the public or roadway traffic. Prohibiting exposure to roadways on two sides of the facility will mean spending less money to harden (i.e., protect) the facility against threats such as vehicular ramming. According to VA officials, using the combined standards did not compromise security on the Omaha project.

Involving the general contractor early on in the design for the Omaha project, an approach VA does not typically take, contributed to both time and cost savings. VA officials told us that engaging the general contractor during the project's design stage allowed the project to begin more quickly and was also helpful in obtaining information about costs and keeping the project within budget. However, VA officials said that depending on the project and contracting method used, it might not be possible to apply this contracting practice to VA construction projects outside of the pilot program.

A private-sector design review method helped to save time. The Omaha donor group used a software package that allowed all design-document reviewers to simultaneously review design documents and then store their comments in a single place. VA officials said this approach was more efficient than VA's typical review method and cut about 18 weeks from the project's timeline. VA officials also said use of this software was a best practice that could be applied to VA construction projects more broadly. In addition, the donor group and VA employed fewer rounds of design reviews than VA typically uses; this streamlining also helped to save time during the design process, according to VA officials.

Further, VA officials said that the CHIP-IN donation approach can allow VA to address projects more quickly because they are addressed outside of VA's typical selection and funding process. For example, VA officials told us that because of the agency's current major construction backlog, using the CHIP-IN donation approach allowed work on the Omaha project to begin at least 5 years sooner than if the CHIP-IN approach had not been used. The Omaha project's priority was low relative to other potential projects, so it was unlikely to receive additional funding for construction for several years. For example, one agency official noted that even if the project was at the top of VA's priorities, there is a backlog of 20 major construction projects worth $5 billion ahead of it—meaning the Omaha project would probably not be addressed for at least 5 years. VA officials also told us that as they consider future CHIP-IN projects, they are looking for other projects that, like the one in Omaha, are needed, but may not be a top priority given available funding and could be moved forward with a private sector donation. In addition, use of the CHIP-IN donation approach and decision to pursue an ambulatory care center contributed to an earlier start on a project to address veterans' needs. However, as mentioned earlier, VA officials said that future construction projects will be necessary to address some needs that were part of the original replacement medical center plan.

Stakeholders Agreed That Relying on Philanthropic Donations and Identifying Donors Is a Challenge to Establishing Pilot Partnerships

A main challenge to establishing pilot partnerships is the reliance on large philanthropic donations, according to VA officials, the Omaha donor group, and RFI respondents. In general, the potential donor pool may not be extensive given the size of the expected donations—in some cases tens or hundreds of millions of dollars—and the conditions under which the donations must be made. For example, as discussed earlier, VA officials said that when interested entities contact them about the pilot, they are often surprised by the donation commitment. When we spoke with two entities that responded to VA's RFI, one told us that they "could not afford to work for free" under the pilot, while another told us that developers are more likely to participate in the pilot if they see an incentive, or a return on their financial contribution. Also, VA officials told us that some potential project locations have not received any appropriations—making the projects' implementation less appealing to potential donors.
The Omaha donor group noted that a VA financial contribution at or above 50 percent of a project's estimated cost is essential for demonstrating the agency's commitment and for leveraging private-sector donations. To address challenges involving the philanthropic nature of the pilot, ORP officials told us that VA has tried to identify strategies or incentives that could encourage donor involvement. For example, the CHIP-IN steering committee is considering what incentives might be effective to encourage greater participation. One ORP official told us that such incentives could include potential naming opportunities (that is, authority to name items such as facility floors, wings, or the actual facility), although offering such incentives may require changes in VA's authority. Further, because it may be difficult to secure donations for larger, more costly projects, some VA officials, donor group representatives, and one RFI respondent we spoke to suggested that VA consider developing less costly CHIP-IN projects—giving VA a better chance of serving veterans by filling gaps in service needs. Other VA officials, however, said they wanted to focus on larger projects because the pilot allows only five projects. Another challenge is that VA generally does not possess marketing and philanthropic development experience. VA officials told us that this makes the inherent challenge of finding donors more difficult. While VA officials have used the assistance of a nonprofit entity that has marketing expertise, they also said that going forward it would be helpful to have staff with relevant marketing and philanthropic development experience to assist with identifying donors. VA officials said this expertise could possibly be acquired through hiring a contractor, but funding such a hire may be difficult within their existing resources.

CHIP-IN Team Lacks Documented Roles and Responsibilities and Has Limited Available Staffing

As discussed above, the CHIP-IN pilot presents an uncharted approach to VA's implementation of projects, and using CHIP-IN has aspects of an organizational transformation in property acquisition for the agency because it leverages donation partnerships and streamlines VA's typical funding process. We have found that a key practice of organizational transformation includes a dedicated implementation team to manage the transformation process and that leading practices for cross-functional teams include clear roles and responsibilities, and committed members with relevant expertise. VA officials and Omaha donor group representatives acknowledged that a dedicated CHIP-IN team could help focus pilot implementation—and that no such team existed within the agency. ORP officials told us that the newly formed CHIP-IN steering committee would provide the necessary leadership for pilot implementation. They anticipate that a working group will be part of the committee and serve as a dedicated team for the pilot. However, as discussed below, roles and responsibilities have not been defined and staff resource decisions have not been made.

Clear and documented roles and responsibilities. Several VA officials told us that responsibility for managing the overall pilot effort had not been assigned, and that they had different interpretations of which office had responsibility for leading the pilot. Some officials identified ORP as the leader, while others thought it was CFM or the Center for Strategic Partnerships.
One CFM official told us that a clear definition of responsibilities is needed under the pilot, along with a dedicated office or person with the ability to make decisions when an impasse across offices exists. Similarly, a senior VHA official told us that leadership roles and responsibilities for the pilot are not fully understood within the agency, which has made establishing partnerships under the pilot a challenge. For example, both VA officials and Omaha donor group representatives identified the lack of a senior-level leader for the pilot as a challenge and emphasized the need for strong pilot leadership going forward. Now that a CHIP-IN steering committee is being formed to provide pilot leadership, ORP officials intend to discuss committee members' roles and responsibilities. This discussion was planned for the first committee meeting but was rescheduled for the next meeting in October 2018. ORP officials, however, told us that they do not expect to assign individual members' roles and responsibilities until a future date. VA officials did not have a timeline for when committee or individual members' roles and responsibilities would be formally documented. ORP officials said that roles and responsibilities for the pilot have not been defined because after enactment of the CHIP-IN Act, their first priority was to engage the Omaha donor group and negotiate an agreement. Later, after the Omaha project was progressing, ORP officials said they turned their attention to formalizing the pilot program and identifying additional donation partnerships. While it is important to concentrate on completion of individual projects, it is also important to plan for the overall pilot's implementation—to help ensure that the pilot's purpose and goals are met in a timely manner. We have found that clarifying roles and responsibilities is an important activity in facilitating strong collaboration and building effective cross-functional teams. In addition, we have found that articulating roles and responsibilities is a powerful tool in collaboration and that it is beneficial to detail such collaborations in a formal, written document.

Committed team members. Various VA offices and staff members have worked on the CHIP-IN pilot in addition to their other responsibilities, but several VA officials told us the resources currently dedicated to the pilot are insufficient. During our review, an ORP official told us that two ORP staff each spent about 4 to 6 hours per week on the pilot, as collateral duties. However, since that time, one of these two staff members has left the agency. A senior VA official told us that ORP and the Center for Strategic Partnerships could each use two to three more dedicated staff members to work solely on the pilot. While one ORP official said that additional staff would likely be assigned after other CHIP-IN projects are identified, a Center for Strategic Partnerships official said a specified percentage of staff time should be dedicated now to identifying potential donors. As mentioned above, VA officials told us they anticipate a working group will be part of the CHIP-IN steering committee and will serve as the dedicated team to implement the pilot. However, VA has not yet documented how it will staff the working group, including how it will obtain the needed expertise within its existing resources.
According to one VA official, staff had not been initially dedicated to the pilot because the CHIP-IN Act did not provide resources to fund a dedicated team for the pilot, so VA has needed to implement the pilot within its existing resources. This VA official also told us that they were not certain VA could support a dedicated team with existing resources. Another official indicated that VA would need to consider how to incorporate CHIP-IN into the agency's operations if the pilot program were expanded beyond the initial pilot and then dedicate needed resources. Dedicating a strong and stable implementation team is important to ensuring that the effort receives the focused, full-time attention needed.

Team members with relevant knowledge and expertise. As previously discussed, VA officials told us that it would be helpful for a CHIP-IN team to include stakeholders with certain expertise, such as marketing and philanthropic development experience. In addition, representatives from the Omaha donor group said going forward, proactive public relations expertise is needed from VA headquarters (in particular, for external communications outside of the partnership) to quickly and positively address any incidents that could negatively impact VA's ability to encourage donor participation in the pilot at the local level. For example, in the event of critical news reports about a local VA facility, such as what occurred in Omaha, donor group representatives said that additional public relations support would be helpful. VA officials also told us that a CHIP-IN team should be a collaborative effort across several offices. Specifically, one senior VA official said a cross-functional team with representation from ORP, CFM Operations, the Center for Strategic Partnerships, VHA, and the Office of Asset Enterprise Management (which has budget and finance expertise) would be useful in focusing and implementing the pilot. Leading practices for cross-functional teams include having members with a wide diversity of knowledge and expertise. Having a dedicated team or working group that consists of committed members with clear roles and responsibilities could assist VA in implementing the CHIP-IN pilot. For example, the working group could focus time and attention on strengthening design of the pilot program as a whole, instead of implementing projects on a piecemeal basis. Further, clearly identifying and documenting roles and responsibilities could help relevant stakeholders define and agree upon pilot objectives as well as an assessment methodology and evaluation plan. In addition, including stakeholders with relevant expertise on the dedicated team may assist VA in identifying viable projects and negotiating partnership agreements more readily.

Conclusions

The CHIP-IN pilot is a unique, time-limited opportunity for VA to test a new way of building needed medical facilities by using non-federal funding sources—donors—to leverage federal funds. Though the first project is still under way, stakeholders have already noted benefits of the donation partnership approach, including potential cost and time savings as well as learning about private sector practices that could be applied more broadly to VA construction. However, VA is not yet collecting the information it needs to support decisions by VA or Congress about the pilot.
Without a strengthened pilot design—including measurable objectives, an assessment methodology, and an evaluation plan—that can help inform decisions about the scalability of the pilot, it may not be clear to VA and Congress whether the CHIP-IN approach could be part of a longer-term strategy or how lessons learned could enhance other VA construction efforts. While leadership for the pilot had not been previously assigned, a newly formed CHIP-IN steering committee is meant to focus on the pilot's implementation. Defining and documenting roles and responsibilities for this committee—and identifying the resources needed to effectively implement the pilot—could assist VA in partnering with additional donors and creating new opportunities to meet the urgent needs of veterans.

Recommendations for Executive Action

We are making the following three recommendations to VA.

The Secretary of VA should ensure that internal stakeholders—such as the CHIP-IN steering committee's members—agree to and document clear, measurable objectives for the CHIP-IN pilot that will help inform decisions about whether and how to scale the program. (Recommendation 1)

The Secretary of VA should ensure that internal stakeholders—such as the CHIP-IN steering committee's members—develop an assessment methodology and an evaluation plan that are linked to objectives for the CHIP-IN pilot and that help inform decisions about whether and how to scale the program. (Recommendation 2)

The Secretary of VA should ensure that the CHIP-IN steering committee documents the roles and responsibilities of its members and identifies available staff resources, including any additional expertise and skills that are needed to implement the CHIP-IN pilot program. (Recommendation 3)

Agency Comments

We provided a draft of this report to VA for comment. In its written comments, reproduced in appendix I, VA concurred with our recommendations and stated that it has begun or is planning to take actions to address them. VA also provided a general comment on the role of VHA in the CHIP-IN pilot, which we incorporated in our report. We are sending copies of this report to the appropriate congressional committees, the Secretary of Veterans Affairs, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (213) 830-1011 or vonaha@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II.

Appendix I: Comments from the Department of Veterans Affairs

Appendix II: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Cathy Colwell (Assistant Director), Kate Perl (Analyst in Charge), Melissa Bodeau, Jennifer Clayborne, Peter Del Toro, Shirley Hwang, Terence Lam, Malika Rice, Crystal Wesco, and Elizabeth Wood made key contributions to this report.
Why GAO Did This Study

VA has pressing infrastructure needs. The Communities Helping Invest through Property and Improvements Needed for Veterans Act of 2016 (CHIP-IN Act) authorized VA to accept donated real property—such as buildings or facility construction or improvements—through a pilot program. VA has initiated one project in Omaha, Nebraska, through a partnership with a donor group. VA can accept up to five donations through the pilot program, which is authorized through 2021. The CHIP-IN Act includes a provision for GAO to report on donation agreements. This report (1) examines the extent to which VA's pilot design aligns with leading practices and (2) discusses what VA has learned from the pilot to date. GAO reviewed VA documents, including plans for the pilot program, and visited the Omaha pilot project. GAO interviewed VA officials, the Omaha donor group, and three non-federal entities that responded to VA's request seeking donors. GAO compared implementation of VA's pilot to leading practices for pilot design, organizational transformation, and cross-functional teams.

What GAO Found

The Department of Veterans Affairs (VA) is conducting a pilot program, called CHIP-IN, that allows VA to partner with non-federal entities and accept real property donations from them as a way to help address VA's infrastructure needs. Although VA signed its first project agreement under the program in April 2017, VA has not yet established a framework for effective design of the pilot program. Specifically, VA's pilot program design is not aligned with four of five leading practices for designing a well-developed and documented pilot program. VA has begun to implement one leading practice by improving its efforts to communicate with relevant stakeholders, such as including external stakeholders in key meetings. However, the VA offices involved have not agreed upon and documented clear, measurable objectives for the pilot program, which is a leading practice. Further, VA has not developed an assessment methodology or an evaluation plan that would help inform decisions about whether or how the pilot approach could be expanded. While VA officials said they intend to develop these items as tasks for the newly formed CHIP-IN steering committee, they have no timeline for doing so. Without clear objectives and assessment and evaluation plans, VA and Congress may have difficulty determining whether the pilot approach is an effective way to help address VA's infrastructure needs. To date, the CHIP-IN pilot suggests that donation partnerships could improve construction projects, but identifying donors and establishing a team for the pilot program have presented challenges. Officials from VA and the donor group for the first pilot project—an ambulatory care center in Omaha, Nebraska—said they are completing the project faster than if it had been a standard federal construction project, while achieving potential cost savings by using private sector practices. However, VA officials said it is challenging to find partners to make large donations with no financial return, and VA's lack of marketing and philanthropic development experience exacerbates that challenge. VA and the donor group agreed that a dedicated team of individuals with relevant expertise could facilitate the pilot's implementation. The new CHIP-IN steering committee could serve this purpose, but it lacks documented roles and responsibilities.
Establishing a team with clear roles and responsibilities and identifying both available and needed staff resources could assist VA in partnering with additional donors and creating new opportunities to meet veterans' needs.

What GAO Recommends

GAO is recommending that VA: (1) establish pilot program objectives, (2) develop an assessment methodology and an evaluation plan, and (3) document roles and responsibilities and identify available and needed staff resources. VA concurred with GAO's recommendations.
Background

Preventing Conflict and Seeking Stability Abroad Are U.S. Priorities

The National Security Strategy released in December 2017 states that the U.S. government has a national security interest in addressing conflict and instability in fragile and failing nations. The strategy commits to strengthening nations where state weakness may foster threats such as violent extremism. The strategy also prioritizes efforts that empower reform-minded governments, people, and civil society in order to address the drivers of state fragility. In the SAR, a joint review of U.S. stabilization efforts—diplomacy, assistance, and defense—the Secretaries of State and Defense and the USAID Administrator stated that increasing stability and reducing violence in conflict-affected areas are essential to meeting U.S. national security goals. State and USAID's joint strategic plans have identified strategic objectives to counter instability, transnational crime, and violence that threaten U.S. interests. Notably, the plan for fiscal years 2018–2022 states that the agencies will make early investments in preventing conflict, atrocities, and violent extremism before they spread. The 2018 National Defense Strategy identifies objectives to deter adversaries from aggression against U.S. interests and prevent terrorists from directing or supporting external operations against the United States and its citizens and allies overseas. Additionally, the Quadrennial Diplomacy and Development Review released in 2015 and covering 2015 to 2019 outlines the lines of effort that fall under State and USAID's commitment to prevent and mitigate conflict. These lines of effort include countering violent extremism, strengthening U.S. and international capacity to prevent conflict, preventing atrocities, establishing frameworks for action in fragile states, strengthening partner capacity to protect civilians and restore peace, and eliminating the threat of destabilizing weapons. In the Quadrennial Defense Review released in 2014 and covering 2014–2018, DOD also asserts that "the surest way to stop potential attacks is to prevent threats from developing." The 2014 Quadrennial Defense Review further states that tackling root drivers of conflict, including building capacity with allied and partner militaries, and sustaining a global effort to detect, disrupt, and defeat terrorist plots are part of DOD's efforts to protect the United States.

U.S. foreign policy strategies and plans identify the Middle East and Africa as strategically important regions affected by conflict and instability. In countries such as Iraq, Nigeria, and Syria, the United States is working to address drivers of conflict and stabilize areas liberated from violent extremist groups.

Iraq. As we have previously reported, U.S. government efforts for the global war on terrorism in Iraq began in 2003. Since the removal of the Ba'ath regime and the construction of a new government, Iraq has experienced varying levels of political instability, sectarianism, and conflict. In December 2011, the last units of U.S. Forces–Iraq were withdrawn from that country. After their departure, the United States continued to provide assistance such as training and equipment to Iraq's military and security forces and funding for programs to strengthen political institutions and civil society organizations and to promote economic growth in Iraq.
In 2014, the Islamic State of Iraq and Syria (ISIS) emerged as a major force in Iraq, destabilizing various areas of the country, according to reporting from State and USAID. As of December 2017, Iraqi forces, with support from the United States and the Global Coalition to Defeat ISIS (Coalition), had liberated the country's territory from the control of ISIS, according to State (see fig. 1). According to a State official, although ISIS no longer holds Iraqi territory, it remains a terrorist threat.

Syria. Syria's instability is largely caused by an ongoing civil war that began with a government crackdown on antigovernment protests in March 2011. USAID has reported that the conflict has led to economic collapse, a breakdown in services and governance, and instability, which violent extremist groups, including ISIS, have sought to exploit. Millions of Syrians have become refugees or internally displaced due to this crisis, according to reporting from the United Nations High Commissioner for Refugees. In May 2012, the United States began providing nonlethal aid to Syrian opposition forces, and in September 2014, the United States began air strikes against ISIS components in Syria. In January 2015, DOD created the Syria Train and Equip program to provide assistance, including training and equipment, to vetted members of the Syrian opposition and to support efforts to counter ISIS and liberate territory from ISIS. For populations that remain in Syria, governance entities and institutions face challenges in delivering services to their communities, according to USAID. As of July 2018, DOD has reported that the Syrian Democratic Forces, with Coalition support, continued efforts to defeat ISIS in the middle Euphrates River Valley (see fig. 1 above). Additionally, the civil war between Syrian opposition forces and the Assad regime was ongoing as of July 2018, according to reporting from the United Nations.

Nigeria. There are multiple sources of instability across Nigeria. The terrorist groups Boko Haram and its offshoot ISIS-West Africa have destabilized areas in northeast Nigeria and the greater Lake Chad Region, leaving over 2 million people displaced and millions more dependent upon humanitarian assistance as of June 2018, according to USAID reporting. Also, in the Middle Belt and Northwest of the country, according to a State official and reporting from Search for Common Ground, there is rural violence among civilians that includes criminal attacks, banditry, cattle rustling, and long-standing intercommunal conflicts between farming and herding communities. This violence has exacerbated tensions between the populations in the north and south and among ethnic and religious groups across the country. Figure 2 shows incidents involving fatalities due to conflict and violent extremism in Nigeria from January 1, 2012, to September 8, 2018.

Multiple U.S. Entities Conduct Efforts to Address Conflict Abroad

The U.S. government, through federal agencies and federally funded organizations, supports numerous efforts to address instability and prevent conflicts abroad.

State and USAID. These are the principal agencies conducting U.S. foreign policy and international development and humanitarian assistance. State is the Executive Branch's lead foreign affairs agency. State leads U.S. foreign policy through diplomacy, advocacy, and assistance. USAID is the U.S. government's lead international development and humanitarian assistance agency with a key role in U.S.
efforts to ensure stability, prevent conflict, and build citizen-responsive local governance.

DOD. While DOD's primary mission is to provide combat-ready military forces to deter war and protect the United States, DOD also provides support to foreign disaster relief through humanitarian assistance and stabilization efforts across all phases of conflict and military operations, and in combat and non-combat environments.

U.S. Institute of Peace (USIP). USIP is an independent national institute, founded by Congress, to promote international peace and the resolution of conflicts among the nations and peoples of the world without recourse to violence. USIP is governed by a bipartisan Board of Directors, which includes the Secretaries of State and Defense or their designees, the President or Vice President of the National Defense University, and 12 others. USIP's primary funding comes from congressional appropriation and can be supplemented by funds from U.S. government partners. USIP staff work abroad and at its headquarters in Washington, D.C. USIP initiates its own work and enters into interagency agreements with U.S. agencies such as State, USAID, and DOD, according to USIP officials. Because USIP is not an agency within the executive branch, it is not a formal participant in interagency national security policy processes involving State, USAID, and DOD, according to State.

U.S. agencies and USIP are engaged in efforts to counter violent extremism and address conflict in countries affected by instability and violent conflicts, including Iraq, Syria, and Nigeria. For example, as areas are liberated from ISIS in Iraq and Syria, the United States is working with its partners to try to consolidate gains, reduce levels of local instability, peaceably manage change, and build the capacity of local governance entities. To improve the effectiveness of these efforts, U.S. agencies have evaluated lessons from similar efforts in countries such as Afghanistan and Iraq. The SAR and assessments from the Special Inspector General for Afghanistan Reconstruction and the Special Inspector General for Iraq Reconstruction are examples of U.S. government initiatives to identify lessons learned from past U.S. efforts.

Key Practices That Can Enhance Interagency Collaboration

In prior work, we have identified key collaboration practices that can be used to assess collaboration at federal agencies (see fig. 3). These practices can help agencies implement actions to operate across boundaries, including fostering open lines of communication, and establish goals based on what the agencies share in common. Additionally, clarifying roles and responsibilities allows agencies to determine who will do what, organize their joint and individual efforts, and facilitate decision making. We have previously found that improving coordination and collaboration across agencies can potentially help agencies reduce or better manage fragmentation, overlap, and duplication.

U.S. Agencies and USIP Conduct Various Efforts to Prevent and Mitigate Violent Conflict and Stabilize Conflict-Affected Areas Abroad

State, USAID, DOD, and USIP reported that they have conducted a variety of efforts in Iraq, Nigeria, and Syria aimed at preventing and mitigating violent conflicts and stabilizing areas affected by such conflicts. In response to our request, each agency and USIP provided descriptions and goals for their specific program-level or project-level efforts in Iraq, Nigeria, and Syria (and in neighboring countries for Syria).
To identify these efforts, each agency and USIP used its own terminology and definitions that were in place in fiscal year 2017.

Efforts reported by State as active in fiscal year 2017. State reported that it conducted a range of ongoing conflict mitigation and stabilization efforts to address violent conflict in Iraq, Nigeria, and Syria in fiscal year 2017. State, in addition to conducting its own efforts, reported that it sometimes conducted these efforts through grants to implementing partners or through interagency agreements with USIP.

For Iraq, State reported a list of three individual efforts and four categories of other efforts as active in fiscal year 2017. These efforts included, for example, antiterrorism training and equipment for law enforcement; promotion of democratic governance and protection of basic human rights; support for religious and ethnic minority groups, internally displaced persons (IDP), and returnees; and clearance of explosive hazards. These programs were intended to help defeat ISIS and transnational terror groups, improve governance and rule of law, and promote reconciliation and the safe return of displaced Iraqis. Figure 4 depicts clearance operations for explosive remnants of war at a water treatment facility in Iraq supported by State.

For Nigeria, State reported 21 efforts as active in fiscal year 2017. State supported programs to prevent and counter violent extremism through media programming, human rights training, police and law enforcement training and equipment, conflict early warning and response systems, and women's and youth empowerment. According to State, these programs were intended to aid in the fight against Boko Haram and ISIS-West Africa by countering the radicalization process that leads individuals to violent extremism, protecting civilians from terrorist groups, and assisting the victims of Boko Haram and ISIS-West Africa and their host communities. To address crime and communal conflict in other regions of Nigeria, State reported that it conducts human rights and investigative training for Nigerian police, supports efforts to teach conflict resolution skills to youth, convenes dialogues between farmer and herder stakeholders to develop conflict resolution mechanisms, and undertakes other efforts.

For Syria, State reported nine efforts as active in fiscal year 2017. State reported efforts that included providing training, equipment, and stipends to Free Syrian Police and education directorates in opposition-controlled parts of the country, and building the capacity of civil society and advocacy organizations, local councils, and civilian networks. According to State, these programs were intended to support the opposition and help counter violent extremists, such as ISIS and al Qaeda in Syria. Appendix II presents a full list of State's reported conflict mitigation and stabilization efforts and their respective goals for Iraq, Nigeria, and Syria, active in fiscal year 2017.

Efforts reported by USAID as active in fiscal year 2017. USAID reported that it conducted a range of ongoing conflict mitigation and stabilization efforts to address violent conflict in Iraq, Nigeria, and Syria in fiscal year 2017. USAID reported that it primarily conducted these efforts through grants and contracts awarded to implementing partners.

For Iraq, USAID reported one effort as active in fiscal year 2017. USAID, along with other international donors, supplies funding to the United Nations Development Program's (UNDP) Funding Facility for Stabilization.
The UNDP, at the request of the Prime Minister of Iraq, and with support from leading members of the Coalition to Degrade and Defeat the Islamic State of Iraq and the Levant (ISIL), established the Funding Facility for Stabilization in June 2015 to help rapidly stabilize newly retaken areas. The aim is to help restore confidence in the leading role of the Iraqi government in these areas and give populations a sense of progress and forward momentum. According to USAID, the Funding Facility for Stabilization supports restoration of essential services and efforts to kick-start the local economy, enabling internally displaced persons to return to their homes.

For Nigeria, USAID reported five efforts as active in fiscal year 2017. USAID reported that it works through its implementing partners to conduct a variety of ongoing country-specific efforts including working with youth to develop countering violent extremism (CVE) action plans, building the capacity of civil society organizations and religious leaders, and providing education for displaced persons and host communities. According to USAID, these efforts are intended to counter violent extremism from Boko Haram and ISIS-West Africa, reduce conflict between herders and farmers, and support state and local government ownership for the continued education of internally displaced children.

For Syria, USAID reported five efforts as active in fiscal year 2017. USAID reported that it supports a multidonor trust fund to restore essential services and works through an implementing partner to enable local councils' ability to restore essential services. USAID reported that it also works through implementing partners to support democratic institutions, livelihoods, and local nongovernmental organizations. According to USAID, the intent of these programs is to enable the early recovery of areas liberated from ISIS by strengthening resistance to extremists, democratic processes, and the influence of strategic moderate actors. Figure 5 depicts a solar array installation that provides renewable energy for a drinking water pumping station in Dar'a Province, Syria, supported by a USAID essential services program.

Efforts reported by DOD as active in fiscal year 2017. DOD reported that it conducted stabilization efforts to address violent conflict in Iraq and Syria in fiscal year 2017.

In Iraq, DOD reported one effort as active in fiscal year 2017. Medical Staff of the Combined Joint Forces Land Component Command—Operation Inherent Resolve provided immediate medical trauma supplies to the World Health Organization to fill a gap in medical supplies available to treat injured civilians. According to DOD, the project was coordinated with State and USAID and was funded through the Overseas Humanitarian, Disaster, and Civil Aid (OHDACA) appropriation. According to DOD, this project was intended to increase the chance of survival for civilians affected by military operations, increase civilian confidence in the government and the humanitarian assistance community, and provide access, influence, and visibility to DOD.

In Syria, DOD reported eight efforts as active in fiscal year 2017. Civil Affairs personnel of Special Operations Joint Task Force—Operation Inherent Resolve provided classroom furniture and school supplies; cold weather items such as jackets, hats, gloves, socks, and blankets; and, in one area, food, cooking fuel, construction material, and garbage removal. The projects were often managed through the local councils.
According to DOD, the projects were coordinated with State and USAID and were funded through the OHDACA appropriation. Generally, according to DOD, the projects were intended to assist vulnerable populations, protect them from ISIL, and support local councils, while also providing access, visibility, and influence for DOD forces. Appendix IV presents a full list of DOD's reported conflict stabilization efforts and their respective goals for Iraq and Syria, active in fiscal year 2017.

Efforts reported by USIP as active in fiscal year 2017. Although USIP generally refers to all of its work as "conflict prevention and resolution," USIP officials stated that all of USIP's efforts fit under the general umbrella of conflict prevention, mitigation, and stabilization and thus reported all of USIP's efforts abroad for Iraq, Nigeria, and Syria (and in neighboring countries for Syria) that were active in fiscal year 2017. USIP reported that it conducts its efforts in conjunction with local staff and implementing partners. According to USIP, some USIP efforts are supported through interagency agreements with U.S. agencies.

For Iraq, USIP reported eight efforts as active in fiscal year 2017. USIP reported that it facilitated targeted dialogues among Iraq's religious minorities to address security and governance challenges to reduce the likelihood of recurring violence and enable the return of IDPs. These dialogues created a monitoring framework to provide early warnings of potential violence. USIP also reported that it facilitated dialogues among Iraqis intended to prevent revenge acts of violence, facilitate the return of the internally displaced, and increase the resilience of communities to violent extremism from ISIS or others. Additionally, USIP reported that it provided both governmental and nongovernmental organizations with training in conflict management and identified influential religious leaders in specific conflict zones for future Iraqi-led mediations, dialogues, and peace and reconciliation efforts. Further, USIP reported that it conducted multiple justice and security dialogues that included police and government officials and citizens in areas affected by the aftermath of ISIS to collect and disseminate lessons learned and best practices.

For Nigeria, USIP reported 14 efforts as active in fiscal year 2017. USIP reported that it conducted training programs, facilitated dialogues, established working groups, collected and shared lessons learned and best practices, and conducted in-country research and assessments involving civilian populations, nongovernmental organizations, police, and youth. The intent of these programs, according to USIP, was to reduce violent conflict and its root causes, strengthen the country's recovery from Boko Haram, and prevent the emergence of other extremist groups in support of long-term stability. In addition, according to USIP, the institute connected U.S. policymakers with key Nigerian officials at the subnational levels who wield significant influence in Nigeria's federal government system but with whom the United States has had limited contact. Figure 6 depicts a USIP symposium in Washington, D.C., funded by State, which included governors from states across northern Nigeria to foster key exchanges and critical discussions with leading American and international experts on the drivers of violent conflict in the region and how to resolve them.

For Syria, USIP reported three efforts as active in fiscal year 2017.
USIP reported that it held dialogues with interfaith and other key leaders to strengthen civil society's engagement and coordinating role with civic, religious, and tribal leaders on conflict management and prevention. For one effort, according to USIP, it has three ongoing grants related to the Syria conflict in neighboring countries that focus on reducing tensions associated with the absorption of Syrian refugees. Appendix V presents a full list of USIP's reported efforts and their respective goals for Iraq, Nigeria, and Syria, active in fiscal year 2017.

U.S. Agencies and USIP Have Incorporated Aspects of Key Collaboration Practices for Their Conflict Prevention, Mitigation, and Stabilization Efforts but Have Not Documented Their Agreement

State, USAID, DOD, and, where appropriate, USIP have incorporated aspects of key collaboration practices to coordinate their conflict prevention, mitigation, and stabilization efforts for Iraq, Nigeria, and Syria. However, the agencies have not documented their agreement on coordination for stabilization efforts in conflict-affected areas through formal written guidance and agreements that address key collaboration practices. The agencies have individually and jointly established some common outcomes for stabilization efforts in Iraq, Nigeria, and Syria. Additionally, State, USAID, DOD, and USIP have generally taken steps to bridge their organizational cultures; identify sources of leadership that facilitate coordination; establish roles and responsibilities; and include relevant participants for their conflict prevention, mitigation, and stabilization efforts in these countries. During the course of our review, State, USAID, and DOD released the SAR, which identified areas where U.S. government coordination for stabilization efforts in conflict-affected areas could be improved; however, the agencies have not documented their agreement as to how they will coordinate these efforts in formal written guidance and agreements that address key collaboration practices. Because multiple federal entities are engaged in U.S. conflict prevention, mitigation, and stabilization efforts in Iraq, Nigeria, and Syria, there is some inherent fragmentation in their efforts as well as the potential for overlap and duplication. According to key practices for enhancing interagency collaboration, articulating interagency agreement on collaborative efforts in formal documents can strengthen those collaborative efforts and could reduce the potential for unnecessary fragmentation, overlap, and duplication.

Outcomes and Accountability

We previously found that establishing common outcomes can help agencies shape and define the purpose of their collaborative efforts. According to a senior State official, the classified country strategies maintained by the National Security Council (NSC) may contain common outcomes for some U.S. conflict prevention, mitigation, and stabilization efforts. However, the NSC did not respond to our requests for information regarding NSC-level country strategies for Iraq, Nigeria, and Syria. In the absence of information from the NSC, we reviewed information provided by the agencies as well as other government documents and found that outcomes for U.S. stabilization efforts in Iraq, Nigeria, and Syria have generally been established by one or more of the agencies. For example, for its stabilization efforts for Iraq, USAID reported that its outcome metric is the return of internally displaced populations to their communities.
USAID also reported that it monitors progress toward this outcome using, in part, quarterly reporting from the United Nations Development Program (UNDP), the implementer for the primary mechanism through which the United States and other donor partners fund stabilization efforts in Iraq.

Similarly, in the case of Nigeria, the U.S. government has established common outcomes and accountability mechanisms related to U.S. efforts to counter Boko Haram and ISIS-West Africa, which includes stabilization assistance. For example, the interagency, NSC-approved U.S. Strategy for Countering Boko Haram/ISIS-West Africa (March 2017) states that the United States seeks long-term end states under which Lake Chad Basin countries, in tandem with local authorities and international partners, are able to address specific regional and community-level conditions that are drivers of conflict and that make communities vulnerable to violent extremist groups. The National Counterterrorism Center facilitates an annual assessment of this strategy, and State, USAID, and DOD review their progress toward achieving objectives in this strategy during weekly meetings, according to State officials.

For Syria, in January 2018, then-Secretary of State Tillerson identified the creation of conditions for the safe and voluntary return of Syrian refugees and internally displaced persons as one of several end states for Syria. However, agency officials reported different views regarding clarity about end states and goals for U.S. efforts in Syria. While some U.S. officials we interviewed could point to sources for U.S. strategy in Syria, other U.S. officials told us that the United States' policy and goals for Syria were unclear. State and DOD officials indicated that the U.S. goals for Syria change in response to conditions where U.S. agencies and their partners operate. A USAID official told us that events on the ground often overtake U.S. efforts, and the complicated regional dynamics also affect U.S. policy goals.

Moreover, the U.S. government has also developed Integrated Country Strategies for Iraq and Nigeria. The Integrated Country Strategies developed by U.S. embassies and missions may contain outcomes related to, but not necessarily specific to, U.S. conflict prevention, mitigation, and stabilization efforts abroad, according to a senior State official. According to State guidance, Integrated Country Strategies should articulate a common set of U.S. government goals and objectives in a country and may also outline performance indicators to measure progress toward each mission objective. The guidance further states that the development of these strategies should include coordination and collaboration among State, USAID, and other U.S. government agencies at the mission.

Finally, at a global level, State, USAID, and DOD have identified a need to improve the outcomes and accountability of U.S. stabilization efforts. Specifically, the 2018 SAR recommended that State, USAID, and DOD work with relevant U.S. embassies, State regional bureaus, DOD combatant commands, and other stakeholders to develop an outcome-based political strategy for stabilization in countries where stabilization is a high priority. The SAR notes the importance of developing an outcome-based political strategy that outlines core assumptions and achievable end states and that guides all lines of effort to ensure unity of purpose within the U.S. government.
The SAR also identified a need to establish indicators to measure changes in the conflict environment and track them consistently over time and stated that doing so could facilitate more rigorous reviews by policy makers to determine whether adjustments are needed in U.S. government political strategy and objectives. State and USIP officials reported that due to USIP's status as an independent, federally funded institute that operates outside of executive branch mechanisms, USIP is not a direct participant in processes to establish common outcomes and accountability mechanisms for U.S. government conflict prevention, mitigation, and stabilization efforts.

Bridging Organizational Cultures

We previously found that it is important for agencies to establish ways to operate across agency boundaries. According to State, USAID, and DOD officials, they have taken steps to bridge their different organizational cultures with regard to their conflict prevention, mitigation, and stabilization efforts for Iraq, Nigeria, and Syria. Specifically, officials said that they have developed a variety of ways to jointly operate across agency boundaries, such as through interagency groups and special coordination positions. USIP does not participate in such interagency mechanisms; however, it reported that it communicates and coordinates with State, USAID, and DOD through other means, such as through bilateral communications and interagency tabletop exercises.

Interagency Groups

State, USAID, and DOD have established various interagency groups to coordinate their efforts for Iraq, Nigeria, and Syria. According to State, USAID, and DOD officials, interagency working groups help agencies to reduce the potential for overlap and duplication of effort. Examples of interagency groups, by country, are described below.

Iraq: A "Liberated Areas Working Group" serves as a clearinghouse and information exchange for both mission-level and headquarters-based counterparts to coordinate agencies' post-ISIS stabilization efforts for Iraq. As another example, the Ambassador or Deputy Chief of Mission at Embassy Baghdad leads a stabilization and humanitarian assistance working group that meets biweekly and includes participation from State, USAID, and DOD.

Nigeria: In 2015, State established an interagency group, headed by a retired U.S. Ambassador, that aims to ensure the coordination of U.S. government efforts to counter Boko Haram. Additionally, the U.S. mission in Nigeria has working groups that examine various issues, such as U.S. efforts to mitigate conflict in the country and address conflict issues in northeast Nigeria.

Syria: Given that the U.S. agencies do not have an embassy-based presence in Syria, State, USAID, and DOD coordinate their stabilization efforts for Syria through three interagency platforms: the Southern Syria Assistance Platform (SSAP), located in Jordan; the Syria Transition Assistance Response Team (START), located in Turkey; and, according to a State official, START-Forward in northeastern Syria, which reports to START. START and SSAP personnel noted that the colocation of State and USAID personnel through these platforms has facilitated coordination between the two agencies, including information sharing. Further, a State Office of Inspector General inspection of the U.S.
Embassy Ankara, Turkey, described START as a "cohesive unit" that blends State and USAID officials, and as a unique and "innovative model for diplomacy in dangerous environments." In addition, for northeast Syria, START established four stabilization-related working groups that meet on a regular basis and include civilian and military representation.

USIP does not participate in these interagency working groups. Rather, USIP reported that it coordinates on a bilateral, multilateral, and as-needed basis with State, USAID, and DOD headquarters personnel as well as with embassy personnel in the countries where USIP conducts work. USIP also reported that it convenes interagency officials through various programs and events, such as tabletop exercises and conferences. For example, in 2016, USIP convened State, USAID, and DOD, along with various nongovernmental and international organizations, to design and implement a tabletop exercise on countering violent extremism in the Lake Chad Basin.

Interagency Collaboration Staff Positions

State, USAID, and DOD officials reported that they also bridge their organizational cultures through staff positions that are aimed at enhancing interagency collaboration, such as liaison positions and officials who are embedded in other organizations. For example, SSAP and START each have civil-military liaisons, and agency officials said that these positions have helped to facilitate information sharing among State, USAID, and DOD. As another example, DOD officials reported that embedded State and USAID officials at U.S. Africa Command have helped to inform DOD's perspective on stabilization in Nigeria. USIP reported that to help bridge organizational cultures and enhance cooperation with its agency partners, the institute operates an annual interagency fellows program. Through the program, USIP hosts one fellow each from State and USAID, and two military officers—one Marine lieutenant colonel and one Army lieutenant colonel—to conduct research and work alongside USIP program staff, according to USIP.

Interagency Definitions of Conflict Prevention, Mitigation, and Stabilization

In 2018, State, USAID, and DOD established a common definition of "stabilization." The three agencies have not established common definitions of the terms "conflict prevention" and "conflict mitigation." In the SAR, State, USAID, and DOD defined "stabilization" as "a political endeavor involving an integrated civilian-military process to create conditions where locally legitimate authorities and systems can peaceably manage conflict and prevent a resurgence of violence. Transitional in nature, stabilization may include efforts to establish civil security, provide access to dispute resolution, and deliver targeted basic services, and establish a foundation for the return of displaced people and longer term development." According to USAID's Administrator, the SAR built on lessons learned from Iraq and Syria, among other locations. The SAR states that, despite the U.S. government's significant international experience in conducting stabilization efforts over recent decades, the U.S. government's concept of stabilization was previously ill-defined and poorly institutionalized across government structures. The SAR also notes that the lack of standardization in defining and conducting stabilization led to repeated mistakes, inefficient spending, and poor accountability for results.
During the course of our review, agency and USIP officials expressed varying views related to the feasibility of articulating a common definition for "conflict prevention" and "conflict mitigation." For example, State and USAID officials noted that all of their agencies' foreign assistance and diplomatic efforts could be considered conflict prevention. USAID also noted that defining the issues or problem sets associated with "conflict prevention" or "conflict mitigation" will depend, in part, on the context in which the relevant government agency engages on those issues. In addition, State's Bureau of Conflict and Stabilization Operations opined that conflict management and mitigation is an evolving field of practice as well as an area that can encompass a very broad and multifaceted range of efforts, including diplomacy, foreign assistance, sanctions, and mobilization of international actions. Agency and USIP officials did not identify a negative effect associated with the lack of common definitions of the terms "conflict prevention" and "conflict mitigation." Nonetheless, according to State and DOD officials, the agencies have started discussing the merits and feasibility of defining "conflict prevention." For example, in response to our inquiry during a joint meeting of the three agencies with us in March 2018 to discuss the SAR, a senior State official noted that the three agencies were collectively exploring the feasibility of developing a standardized definition and harmonized approach for conflict prevention. In its technical comments to our draft report, State indicated that the agencies have begun to collaborate on the development of a definition for "conflict prevention." In addition, as part of its planned structural reorganization of its headquarters bureaus, USAID is proposing the establishment of a new Bureau for Conflict Prevention and Stabilization.

Leadership

We previously found that it is important for agencies to identify sources of leadership for the collaborative effort. Agency and USIP officials identified sources of leadership, such as various NSC committees and special leadership positions, that facilitate coordination of the U.S. government's conflict prevention, mitigation, and stabilization efforts for Iraq, Nigeria, and Syria. State and DOD officials reported that the NSC plays a leadership role in providing strategic direction and policy guidance on issues related to conflict prevention, mitigation, and stabilization. State and DOD officials also said that the NSC convenes interagency actors, including State, USAID, and DOD, to discuss high-level issues in these areas. State reported that the NSC Fragile States and Stabilization Policy Coordination Committee is the broadest conflict-related coordination group. State also reported that a significant degree of NSC-level coordination on conflict-related issues occurs through country-specific working groups, including the groups for Iraq, Syria, and Nigeria. The NSC-level Atrocities Prevention Board is another interagency mechanism that covers conflict-related issues. It has the primary purpose of coordinating a whole-of-government approach to prevent mass atrocities and genocide. While USIP is not a member of NSC-level groups, USIP reported that it engages with the NSC regarding national security issues on a bilateral basis. Agency officials also told us that various special diplomatic positions, such as special envoys and designated coordinators, are a source of leadership for the coordination of U.S.
efforts to address conflict abroad. State and USAID officials cited the role of the Special Presidential Envoy for the Global Coalition to Counter ISIS, who reports to the Secretary of State, as a source of leadership for U.S. stabilization efforts for Iraq and Syria. State officials also cited the former U.S. Special Envoy for Syria position as a source of leadership for U.S. efforts for Syria. In 2015, the Assistant Secretary of State for African Affairs at the time appointed a retired Ambassador as Senior Coordinator on Boko Haram for the Lake Chad Basin region (which includes Nigeria), according to a State official. The Senior Coordinator on Boko Haram chairs a weekly interagency working group that includes a wide array of U.S. agency offices, including State, USAID, and DOD elements at both the headquarters and field levels. According to DOD and State officials, the weekly meetings led by the Senior Coordinator on Boko Haram have helped U.S. agencies deconflict their efforts. According to a USIP report, the Senior Coordinator position has improved the U.S. government's ability to align its efforts at both senior and working levels and has supported broad, interagency information sharing and coordination in the development of a common U.S. strategy to defeat Boko Haram.

Agency officials also cited field-level leadership as helpful in coordinating U.S. government efforts for Iraq, Nigeria, and Syria. For example, for Nigeria, a USAID official told us that the Ambassador and the Deputy Chief of Mission at the U.S. embassy have enhanced and led interagency coordination. The Ambassador has provided input to help deconflict U.S. programming related to conflict mitigation and stabilization, according to this USAID official. For Syria, agency officials identified the leadership of START as helpful in coordinating U.S. stabilization efforts for Syria. Agency officials provided various views regarding the sufficiency of leadership mechanisms currently in place for coordinating U.S. stabilization efforts for Syria. While U.S. field-level efforts for Iraq and Nigeria are led by Ambassadors, the U.S. government's ambassadorial position for Syria has been vacant since 2014. Some officials told us there was a lack of centralized leadership and decision-making authority for Syria, while others said that the current leadership structures were generally sufficient for the coordination of U.S. government efforts for Syria.

Clarity of Roles and Responsibilities

We previously found that it is important for agencies to define and agree on their respective roles and responsibilities for a collaborative effort. We found that agencies' roles and responsibilities for conducting stabilization efforts for Iraq, Nigeria, and Syria were generally clear, and through the SAR, agencies have taken steps to clarify their stabilization roles and responsibilities at a global level. USAID officials reported that the agency has largely funded and overseen stabilization efforts for Iraq through the UNDP and local implementers. In Syria, State and USAID reported that they formed a combined team for implementing stabilization assistance, with support and equipment supplied by the U.S. military. For Nigeria, according to DOD and USAID officials, roles and responsibilities for agencies, including lead and supporting roles, have been defined for the U.S. counter Boko Haram and ISIS-West Africa effort.
Through the 2018 SAR, State, USAID, and DOD recommended the clarification of their respective roles and responsibilities for conducting U.S. stabilization efforts abroad. The SAR recommended State as the overall lead federal agency for U.S. stabilization efforts, USAID as the lead implementing agency for nonsecurity U.S. stabilization assistance, and DOD as a supporting federal agency that provides security and reinforces civilian efforts where appropriate. The SAR noted that clear lines of authority between U.S. agencies would improve effectiveness, reduce duplication and confusion, enable greater accountability, and fully operationalize a whole-of-government approach. In June 2018, the Secretaries of State and Defense and the USAID Administrator approved the SAR, including its recommendations regarding proposed U.S. agency roles and responsibilities for U.S. stabilization efforts. In addition to the SAR, a 2018 DOD-sponsored study also recommended that DOD play a primarily supporting role in non-military U.S. stabilization efforts. According to a DOD official, DOD is in the process of updating its stabilization policy to reflect DOD's supporting role in U.S. government stabilization efforts, in accordance with the SAR. As indicated above, U.S. agencies do not distinguish their coordination of prevention and mitigation efforts as discrete areas of work; as a result, we were unable to assess specific roles and responsibilities among U.S. agencies for these areas.

According to USIP, it aims to complement U.S. executive branch efforts and partner with U.S. agencies to prevent and resolve conflict in areas of interest to U.S. security. USIP reported that it convenes U.S. government and non-U.S. government entities on a variety of high-level policy issues; conducts its own research and programs; and partners with U.S. agencies to conduct research and programs abroad. State, DOD, and USAID officials said that USIP plays a valuable, unique, and helpful role given its status as an independent organization, its specialized expertise, its ability to convene interagency actors in a non-official setting, and its ability to build local relationships through a continuous, field-based presence in certain countries. For example, State officials and nongovernmental partners of USIP in Nigeria told us that USIP played a beneficial role in convening national and local Nigerian leaders for peace and reconciliation dialogues.

Participants

We previously found that it is important to ensure that the relevant participants have been included in the collaborative effort. U.S. government entities conducting conflict prevention, mitigation, and stabilization efforts abroad have demonstrated the key collaboration practice of ensuring the inclusion of all relevant participants. State, USAID, DOD, and other agency officials identified State, USAID, and DOD as the primary U.S. government agencies that participate in mechanisms to coordinate U.S. conflict prevention, mitigation, and stabilization efforts abroad. Agency officials conducting such efforts for Iraq, Syria, and Nigeria reported that the relevant participants—State, USAID, and DOD—are involved in the coordination of such efforts. USIP also reported that it participates in U.S. conflict prevention, mitigation, and stabilization efforts through a variety of means.
At the headquarters level, USIP officials told us that they conduct both regular and as-needed consultations and discussions with senior agency officials at the NSC, State, USAID, DOD, and other agencies. USIP and State officials also indicated that they coordinate their Iraq, Nigeria, and Syria programs that are funded by State through interagency agreements. USIP officials said that USIP is in communication with the embassies in countries where it has an office or ground presence. For Iraq, State and USIP officials located in-country said that they contact one another as needed. According to USIP, in March 2018, it reestablished an American country manager position in Baghdad, Iraq, whose responsibilities include regular communication and coordination with relevant U.S. government officials. For Nigeria, USAID and USIP officials said that USIP participates in a peace and security network that brings together international nongovernmental organizations and governmental actors—including USAID—to share information on peace and security efforts being conducted in Nigeria.

Written Guidance and Agreements

We previously found that agencies that articulate their agreements in formal documents can strengthen their commitment to working collaboratively. We found that U.S. agencies and USIP have documented some aspects of how they coordinate their conflict prevention, mitigation, and stabilization efforts in Iraq, Nigeria, and Syria. However, State, USAID, and DOD have not documented their agreement from the SAR on how they will coordinate their global stabilization efforts in conflict-affected areas, such as their agreements on common outcomes and accountability and their roles and responsibilities for conducting U.S. stabilization efforts.

Notably, USIP provided us with examples of its written agreements with U.S. agencies for which USIP implements conflict prevention and mitigation programming with agency funding. USIP has written agreements with USAID and various State bureaus for programs implemented in Iraq, Nigeria, and Syria. According to USIP officials in Nigeria, USIP and State coordinated the planning and implementation of their efforts during the course of these interagency agreements.

In June 2018, State publicly announced that the Secretaries of State and Defense and the USAID Administrator approved the SAR's recommendations regarding U.S. stabilization efforts, such as the SAR's recommendations to establish outcomes and accountability mechanisms and to formally define agencies' stabilization roles and responsibilities. According to the SAR, while the principles for effective stabilization, such as clarified and formally defined roles and responsibilities, have been widely studied, they have not been systematically applied and institutionalized. According to key practices for enhancing interagency collaboration, articulating agreements in formal documents can strengthen collaborative efforts and reduce the potential for fragmentation, overlap, and duplication. However, the SAR remains a "framework" that, according to State, has yet to be translated into agency policy and practice, and State, USAID, and DOD have not yet developed a plan to implement the SAR recommendations.
State, USAID, and DOD officials acknowledged the importance of codifying their agreement on the collaboration elements raised in the SAR but said that they had not yet decided on a specific document or documents for doing so. For example, officials discussed the idea of establishing an interagency memorandum among the three agencies to codify their specific roles and responsibilities for conducting stabilization efforts, but they indicated that next steps will depend on various factors, such as decisions with regard to State's and USAID's ongoing organizational redesign processes. Agency officials also indicated that they are considering implementing the SAR's recommendations through issuing written, internal guidance within each agency. We have previously found that written guidance, such as an implementation plan or memorandum of agreement, can help agencies during times of transition when leadership changes and there is a need for continuity. By formally documenting agreements according to key leading practices, the agencies will be better positioned to strengthen their collaborative efforts and reduce any potential for fragmentation, overlap, and duplication.

Conclusions

In the National Security Strategy issued in December 2017, the United States emphasized the need to integrate all instruments of the United States' national power in order to deter conflict and secure peace. State, USAID, DOD, and USIP work individually and jointly to prevent and mitigate conflict and stabilize conflict-affected areas. Although the three agencies have incorporated aspects of key practices in the coordination of their conflict prevention, mitigation, and stabilization efforts in Iraq, Nigeria, and Syria, they have not fully demonstrated the key practice of documenting agreements in written guidance. By articulating their agreement in formal documents, such as a memorandum of agreement or an implementation plan, these agencies can strengthen their coordination of U.S. stabilization efforts.

Recommendations for Executive Action

We are making a total of three recommendations, one each to State, USAID, and DOD. Specifically:

The Secretary of State, in collaboration with the Administrator of the U.S. Agency for International Development and the Secretary of Defense, should document their agreement on coordination for U.S. stabilization efforts through formal written guidance and agreements that address key collaboration practices such as defining outcomes and accountability and clarifying roles and responsibilities for U.S. stabilization efforts. (Recommendation 1)

The Administrator of the U.S. Agency for International Development, in collaboration with the Secretaries of Defense and State, should document their agreement on coordination for U.S. stabilization efforts through formal written guidance and agreements that address key collaboration practices such as defining outcomes and clarifying roles and responsibilities for U.S. stabilization efforts. (Recommendation 2)

The Secretary of Defense, in collaboration with the Administrator of the U.S. Agency for International Development and the Secretary of State, should document their agreement on coordination for U.S. stabilization efforts through formal written guidance and agreements that address key collaboration practices such as defining outcomes and accountability and clarifying roles and responsibilities for U.S. stabilization efforts. (Recommendation 3)

Agency and USIP Comments

We provided a draft of this report to State, USAID, and DOD for comment.
State, USAID, and DOD concurred with the recommendations and provided comments, which are reproduced in appendixes VI through VIII, respectively. State, USAID, and DOD also provided technical comments, which we incorporated as appropriate. We also provided a draft of this report to USIP for comment. USIP's comments are reproduced in appendix IX. USIP also provided technical comments, which we incorporated as appropriate.

We are sending copies of this report to the appropriate congressional committees, the Secretary of State, the Administrator of USAID, the Secretary of Defense, the President of USIP, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or FarbJ@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix X.

Appendix I: Objectives, Scope, and Methodology

This report (1) describes examples of conflict prevention, mitigation, and stabilization efforts that U.S. agencies and the U.S. Institute of Peace (USIP) conducted in Iraq, Nigeria, and Syria and their goals in fiscal year 2017 and (2) examines the extent to which U.S. agencies and USIP incorporated key collaboration practices to coordinate their efforts.

To address both objectives, we reviewed the conflict prevention, mitigation, and stabilization efforts of the Departments of State (State) and Defense (DOD), the U.S. Agency for International Development (USAID), and USIP. We reviewed program, coordination, strategy, and planning documentation and interviewed State, USAID, DOD, and USIP officials at headquarters and in the field with regard to specific efforts in Iraq, Nigeria, and Syria. We conducted work in Washington, D.C.; Iraq; Nigeria; and Jordan and held teleconferences with officials in Syria, Turkey, and Kuwait. At the posts, we interviewed U.S. embassy leadership, agency program officers, and implementing partners, where available. We focused on Iraq, Nigeria, and Syria based on several criteria, including U.S. national security interests, countries with ongoing conflict, countries where all three agencies and USIP initially reported that they conducted relevant efforts in fiscal year 2017, prior GAO reporting, and input from agencies and USIP. We cannot generalize our findings from these three countries to the other countries where these agencies have conflict prevention, mitigation, and stabilization efforts.

Specifically, we interviewed officials at the following entities: State officials in the Bureau of African Affairs; Bureau of Conflict and Stabilization Operations; Bureau of Democracy, Human Rights, and Labor; Bureau of International Narcotics and Law Enforcement; Bureau of Near Eastern Affairs; Bureau of Political-Military Affairs; Bureau of Public Affairs; Office of the Inspector General; Office of the Special Presidential Envoy for the Global Coalition to Defeat ISIS (the Islamic State of Iraq and Syria); and the Office of U.S. Foreign Assistance Resources; USAID officials in the Bureau for Africa; Bureau for Democracy, Conflict, and Humanitarian Assistance; and Bureau for the Middle East; DOD officials in the Office of the Secretary of Defense, Office of the Joint Chiefs of Staff, U.S. Africa Command, and U.S.
Central Command; and USIP officials in the Middle East and Africa Center and the Policy, Learning, and Strategy Center.

To describe examples of conflict prevention, mitigation, and stabilization efforts that U.S. agencies and USIP conducted in Iraq, Nigeria, and Syria and their goals in fiscal year 2017, we collected, synthesized, and summarized information from State, USAID, DOD, and USIP. First, we obtained the definitions of conflict prevention, mitigation, and stabilization from each entity to the extent each entity used and defined these terms. Based on our discussions with each agency and USIP, we determined that we could not use one common definition, as each agency and USIP defined these terms based on its programs and the context of its operations; thus, we would have had to use overlapping terms and definitions to capture their efforts for fiscal year 2017. State and USAID used the term "conflict mitigation and stabilization" and defined their efforts as foreign assistance programs that reduce the threat or impact of violent conflict and promote the peaceful resolution of differences, mitigate violence if it has already broken out, establish a framework for peace and reconciliation, and provide for the transition from conflict to post-conflict environments. DOD used the term "stabilization" and defined it as "an integrated civilian and military process applied in designated fragile and conflict affected areas outside the United States to establish civil security, address drivers of instability, and create conditions for sustainable stability—a condition characterized by local political systems that can peaceably manage conflict and change; effective and accountable institutions that can provide essential services; and societies that respect fundamental human rights and the rule of law." USIP generally referred to its work as conflict prevention and resolution, which may include conflict prevention, mitigation, and stabilization efforts. USIP did not have current definitions for these terms in fiscal year 2017. USIP officials stated that all of USIP's efforts would fit under the general umbrella of conflict prevention, mitigation, and stabilization and reported all of USIP's efforts abroad for Iraq, Nigeria, and Syria (and in neighboring countries for Syria) that were active in fiscal year 2017.

Second, to collect the data describing the efforts and their goals from each agency and USIP, we developed a standardized data collection instrument. We defined "efforts" as any program, initiative, or other similar level of engagement and also accepted projects and activities when reported. We had each agency and USIP use its own terms, definitions, and categorizations of efforts in this instrument. Based on our discussions with the agencies and USIP, we determined that this would still allow us to collect a comprehensive set of programs from each entity and to learn about their key efforts in this domain. However, we recognize that some entities might have included programs that other entities would not have included, even though both entities' programs may have had many similarities, because of the entities' differing definitions and terms. To ensure that our report could be made publicly available, we also accepted reported categories of programs if listing each program separately would have meant including controlled unclassified information (sensitive but unclassified).
Within the data collection instrument, we asked agencies to report efforts by country, specifically, for Iraq, Nigeria, and Syria. To corroborate entries in the instrument, we requested that the agencies and USIP also provide one document or website link supporting each entry. Not all agencies fully complied with this request. In some cases, we conducted web searches for any publicly available supporting information. Third, we reviewed the reported data and supporting documents and obtained clarification from agency officials where needed. We synthesized and summarized information for each effort in this report’s appendixes and, at a higher level, in the body of the report. We requested technical comments on our summarized information from the agencies and USIP, and incorporated their suggestions as appropriate. We did not independently verify whether the reported lists of conflict prevention, mitigation, and stabilization efforts included all such efforts in Iraq, Nigeria, and Syria (and in neighboring countries for Syria). To examine the extent to which U.S. agencies and USIP incorporated key collaboration practices to coordinate their conflict prevention, mitigation, and stabilization efforts, we analyzed information about State, USAID, DOD, and USIP’s coordination using six of the seven key practices for implementing interagency collaborative mechanisms that we have previously identified and that were applicable to our review. We assessed coordination of agency and USIP efforts for conflict prevention, mitigation, and stabilization as a whole because, as indicated above, the agencies did not always distinguish their coordination efforts to address conflict using the same terms or categorization of efforts. Where information was available, we assessed whether the agencies and USIP had generally incorporated or not incorporated the six selected key practices to coordinate their efforts between State, USAID, DOD, and USIP at the headquarters level and for our selected countries of Iraq, Nigeria, and Syria. To make this determination, we examined agency and USIP documents and conducted interviews about interagency collaboration activities with officials from State, USAID, DOD, and USIP. We reviewed agency reports; jointly developed and independently developed strategies; interagency agreements; monitoring reports; and public statements by senior U.S. government officials, among other documents. We also reviewed agency and third-party reports that assessed interagency collaboration, among other issues, though it was beyond the scope of this review to assess the methodology or underlying data in these reports. During the course of our work, State, USAID, and DOD released the 2018 Stabilization Assistance Review: A Framework for Maximizing the Effectiveness of U.S. Government Efforts to Stabilize Conflict-Affected Areas. This report assessed U.S. stabilization assistance globally in conflict-affected areas. We reviewed the contents of the report and interviewed agency officials associated with this review to better understand their findings as may be related to the key collaboration practices applicable to our review. Although the National Security Council (NSC) is responsible for coordination of security-related activities and functions of the executive departments and agencies, the NSC did not respond to our request for documents and interviews. We mitigated this limitation by interviewing officials at the three agencies and reviewing other available documentation including the U.S. 
During our visit to the U.S. embassy in Nigeria, we observed meetings for two interagency working groups. We also interviewed implementing partners for U.S. government and USIP efforts in Iraq, Jordan, and Nigeria. We used our analysis of agency and USIP documents and the results of our interviews with officials to assess collaboration practices among State, USAID, DOD, and USIP. To aid in our analysis of coordination from our review of documents and interviews, we used the information obtained under the first objective and compared State, DOD, USAID, and USIP descriptions of each of their efforts in Iraq, Nigeria, and Syria to assess for any unnecessary duplication. As discussed above, some entities may have included efforts that other entities would not have included based on their definitions for the terms in our scope. As a result, our analysis only includes the list of programs provided by the agencies to assess for duplication.

We conducted this performance audit from April 2017 to September 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: State Reported Conflict Mitigation and Stabilization Efforts for Iraq, Nigeria, and Syria, Fiscal Year 2017

IRAQ

Anti-Terrorism Assistance Program (ATA)

The Department of State's (State) ATA Program is managed by the Bureau of Counterterrorism and implemented by the Bureau of Diplomatic Security. The ATA program trains and equips selected Iraqi law enforcement agencies to counter improvised explosive devices, respond to critical incidents, and conduct terrorism-related investigations. ATA funds support training courses, consultations, associated equipment deliveries, and training support costs in Iraq and other selected third-country training locations. ATA provides the antiterrorism training and equipment to help Iraqi law enforcement agencies deal effectively with security challenges within their borders, to defend against threats to national and regional stability, and to deter terrorist operations across borders and regions. ATA assists efforts to defeat the Islamic State of Iraq and Syria (ISIS) and counter transnational terror groups and organizations by curtailing the transit of foreign terrorist fighters throughout the country and mitigating the effects of terrorist incidents.

DRL Good Governance Programs

State's Bureau of Democracy, Human Rights, and Labor (DRL) conducts Good Governance Programs in Iraq through grants to implementing partners. These programs aim to advance the equitable representation of religious and ethnic minority groups and internally displaced persons (IDP), women, and other populations marginalized in governance structures. The programs are also intended to promote equitable access to resources and services and support reform efforts on key issues of human rights and democratic governance. Programming engages civil society to develop and implement key democratic reform processes and institutions in both the central government and the Kurdistan Regional Government.
The goals of Good Governance Programs in Iraq are to strengthen citizen-responsive governance, security, and rule of law to prevent instability, violence, or other crises through collaboration with Iraqi partner institutions on activities that combat corruption and strengthen governance.

DRL Political Competition and Consensus Building Programs

State's DRL conducts Political Competition and Consensus Building Programs in Iraq through grants to implementing partners. Capitalizing on political openings created through national and provincial elections, these programs intend to work with newly elected officials and parties to strengthen their ability to equitably represent the needs of their constituents, with a particular focus on outreach to minorities and marginalized populations. One publicly competed grant will support avenues for citizens to negotiate disputes and debate policy priorities through peaceful, democratic methods, and will work to ease tensions between the central government and the Kurdistan Regional Government. The overall goal of these programs is to build the capacity of the government of Iraq to take the lead in strengthening citizen-responsive governance, security, and rule of law to prevent further instability and violence. DRL programming intends to help the government of Iraq become more inclusive, transparent, and responsive with increased participation by women, youth, and religious and ethnic minorities.

DRL Rule of Law Programs

State's DRL conducts Rule of Law Programs in Iraq through grants to implementing partners. These programs are intended to promote reconciliation initiatives, including efforts to counter violent extremism; reintegrate returning IDPs, survivors, and their families; rehabilitate men and boys affected by the conflict; reconstitute and protect minority communities—in support of the global religious minorities earmark; and support civil society to promote accountability and transparency. More specifically, these efforts aim to (1) strengthen civil society's ability to monitor the status of detainees and advocate for fair treatment, anti-torture, and due process; promote protection of basic human rights and democratic principles; and provide psychosocial support for trauma survivors; (2) increase accountability for human rights violations, including those associated with the current crisis, with a particular focus on the most vulnerable Iraqis, including religious and ethnic minorities, and women and children; and (3) support efforts to advocate for the rights and protections of women, girls, IDPs, victims of war—including Marla Ruzicka Iraqi War Victims Fund beneficiaries—and other marginalized groups.

DRL Social and Economic Services and Protections for Vulnerable Populations Programs

State's DRL conducts Social and Economic Services and Protections for Vulnerable Populations Programs in Iraq through grants to implementing partners. Programs may include livelihood and vocational training; small and medium enterprise creation and support; psychosocial and legal aid services; compensation for war victims/reparations; and other efforts to support the rehabilitation of victims of conflict that are not reached through current assistance. These programs aim to address the post-conflict vulnerabilities of disproportionately affected marginalized populations that are often targeted by transnational terror groups and organizations to spread radicalization.
The particular emphasis is on widows, single female-headed households, vulnerable youth, religious minorities in support of the global earmark, and victims of torture and war—including Marla Ruzicka Iraqi War Victims beneficiaries.

Explosive Remnants of War (ERW) Clearance

State's Bureau of Political-Military Affairs supports Explosive Remnants of War (ERW) Clearance efforts in response to recent activities of ISIS in Iraq that have dramatically altered the Conventional Weapons Destruction landscape. ISIS used mass-produced, technologically advanced improvised explosive devices (IED) to defend captured territory and target Iraqi Security Forces, as well as to booby-trap homes, public spaces, farmland, and infrastructure to discourage the return of IDPs. As IDPs return to their communities, these devices continue to perpetuate ISIS's reign of terror by indiscriminately killing civilians and impeding stabilization operations. This program, which State conducts through implementing partners, supports the urgent survey and clearance of explosive hazards from critical infrastructure associated with the delivery of clean water, electricity, healthcare, education, and transportation, as well as other sites in areas of Iraq liberated from ISIS, to facilitate follow-on stabilization projects, the restoration of basic community services, and the return of IDPs. This program also supports the survey and clearance of ERW in areas impacted by legacy contamination in Iraq's North and South. The overall goal is to assist efforts to defeat ISIS and help the government of Iraq support the safe return of Iraqis who were displaced from their homes by ISIS or liberation campaigns.

Mine Risk Education

State's Bureau of Political-Military Affairs conducts the Mine Risk Education and Victims' Assistance programs in Iraq through grants to implementing partners. The risk education program teaches men, women, and children across Iraq about the dangers posed by explosive hazards. This program focuses on IDPs who will be returning to areas liberated from ISIS as well as communities who have already returned to liberated areas. The program also provides risk education to people in North and South Iraq who live and work near legacy ERW contamination. The goal of this program is to strengthen citizen-responsive governance and security to prevent further instability and violence as well as to bolster human security.

NIGERIA

Advance Human Rights Training for Law Enforcement Officers

State's Bureau of International Narcotics and Law Enforcement, Office of Africa and Middle East Programs, is responsible for the Advance Human Rights Training for Law Enforcement Officers effort. It provides advanced human rights training to Nigerian Police Force officers deploying to the northeast and to trainers from the force's academies and colleges (a train-the-trainer focus). The goal of the effort is to increase the Nigerian Police Force's capacity to better prevent, detect, respond to, and investigate crime while protecting the rights of all citizens.

Arewa 24—Hausa Language Media Platform

State's Bureau of African Affairs, Office of Security Affairs, was responsible for supporting Arewa 24—Hausa Language Media Platform. Arewa 24 is a free-to-air satellite TV channel and trans-media platform based in Kano, Nigeria. Positive narratives intended to help counter violent extremism were inserted into general entertainment programming aimed at young Hausa speakers in Northern Nigeria.
Arewa 24 contributed to a sustainable ecosystem of indigenous capacity to create, develop, produce, and disseminate countering violent extremism (CVE) programming. State supported this effort through grants to an implementing partner. State's Bureau of Counterterrorism also managed separate awards in support of this program. This effort was a Trans-Sahara Counterterrorism Partnership (TSCTP) project, and the U.S. Embassy Abuja Public Affairs Section also supported it. The goals of the effort were to (1) sustain broadcast quality of credible, effective, and entertaining CVE television programming; (2) increase the capacity of media professionals in Northern Nigeria to produce CVE programming; (3) expand the reach of Arewa 24's messaging in Nigeria through agreements and arrangements with other distribution channels; and (4) continue to build commercially derived revenue, paving the way to sustainability. Although all U.S. funding for this program ended on September 30, 2017, Arewa 24 remains on the air through support from private Nigerian investors.

Community Engagement of Federal Security Agents in Peace and Trustbuilding

State's Public Affairs Section at the U.S. Embassy Abuja conducts the Community Engagement of Federal Security Agents in Peace and Trustbuilding effort through a grant to an implementing partner. This project is intended to promote confidence-building measures between youth and government of Nigeria law enforcement and security personnel in Kaduna state. The goal is to improve cooperation between local residents and the government's law and security forces essential to deterring and capturing members of violent extremist organizations.

CVE Messaging Center—White Dove (Farar Tattabara)

State's Bureau of African Affairs, Office of Public Diplomacy and Public Affairs, conducts this effort through a cooperative agreement and grant to an implementing partner. This effort supports the establishment of a messaging center to produce three original radio programs in the Hausa language broadcast weekly over 22 stations across 19 states of northern Nigeria. The program also includes a social media component. The three radio programs deal with themes of de-radicalization, rehabilitation, and reintegration. The primary goal is to produce and disseminate counter-violent extremism organization messaging to mitigate the efficacy of such organizations' propaganda and recruitment efforts.

Ending Labor Exploitation of Almajiri Children and De-Escalating Insecurity

State's Bureau of African Affairs, Office of Security Affairs, conducted the Ending Labor Exploitation of Almajiri Children and De-Escalating Insecurity project through a grant to an implementing partner. The project aimed to reduce vulnerabilities associated with the Almajiri education system by (1) enhancing public awareness of the threat presented to community security by the present state of degeneration of the system of Almajiri education; (2) mobilizing the voices of key community stakeholders, including teachers, parents, religious scholars, and institutions; and (3) supporting the government to put in place adequate laws and policies to reform the system and combat exploitation of the Almajiri in the state of Kano. This effort was a TSCTP project, and the U.S. Embassy Abuja Public Affairs Section also supported it. The project's goal was to contribute to ending the systemic labor exploitation and abuse of Almajiri children prevalent in the Almajiranci system of education, and to reduce the risk of violence and insecurity in Kano state in Northern Nigeria.
This project ended on January 30, 2018.

Equipment Procurements for Police in Northeast Nigeria

State's Bureau of International Narcotics and Law Enforcement, Office of Africa and Middle East Programs, is responsible for the Equipment Procurements for Police in Northeast Nigeria effort. This program equips police commands, stations, and officers in northeast Nigeria. The equipment includes military-grade tents, ponchos, poncho stuff sacks, cots, flashlights, flashlight holsters, individual first aid kits, and portable emergency lighting for 1,500 officers. The goal of this effort is to increase the Nigerian Police Force's capacity to provide security in the Northeast and to lay the foundation for the safe and voluntary return of displaced persons when conditions are conducive.

Global Center on Cooperative Security, Promoting Resilient Communities in Nigeria and Kenya

State's DRL, Office of Global Programming, is responsible for the Global Center on Cooperative Security, Promoting Resilient Communities in Nigeria and Kenya effort. The U.S. Embassy Abuja Political Section also supports this effort. This 2-year program is designed to support existing networks of young civil society leaders; forge new partnerships among local civil society organizations, young people, and government stakeholders; facilitate collaborative learning activities; and organize small grant assistance and in-kind support to local civil society organizations working to prevent violent extremism. The goal of the effort is to mitigate threats of violent extremism in Nigeria and Kenya by promoting community resilience and empowering youth leaders to recognize and prevent violence committed by groups such as Boko Haram and Al Shabaab.

Healing, Reconciliation, and Counter-Radicalization in Adamawa, Borno, and Yobe State

State's Bureau of African Affairs, Office of Security Affairs, conducted the Healing, Reconciliation, and Counter-Radicalization in Adamawa, Borno, and Yobe State project through a grant to an implementing partner. Project activities were designed to help resolve tensions between individuals returning to local communities and those who remained throughout periods of instability and to reduce prejudice and stigmatization of those captured by Boko Haram (especially women who were raped and impregnated, forced into marriage, and/or kept as sex slaves). Community resilience groups were also created to promote community cohesion through the use of strategic communications and counter narratives. This effort was a TSCTP project, and the U.S. Embassy Abuja Public Affairs Section also supported it. This project ended on May 31, 2018.

International Law Enforcement Academy Program (ILEA)—Countering Violent Extremism Series

State's Bureau of International Narcotics and Law Enforcement Affairs, Office of Anti-Crime Programs, is responsible for the International Law Enforcement Academy Program (ILEA)—Countering Violent Extremism Series. Nigeria is one of the member countries of ILEA Gaborone, ILEA Roswell, and the West Africa Regional Training Center in Accra. In fiscal year 2017, Nigerian law enforcement and criminal justice system personnel participated in a specialized Countering Violent Extremism (CVE) course series, which included anticorruption, community policing, combating CVE in prisons, threat finance, post-blast investigations, and law enforcement techniques to combat terrorism. The ILEA program generates course schedules annually based on feedback from participant countries, like Nigeria, as well as U.S. federal law enforcement and State functional and geographic bureaus. The program is also a cooperative effort that involves the expertise of trainers and agents from federal, state, municipal, and foreign law enforcement agencies.
The ILEA program pursues three core objectives: building the capacity of foreign criminal justice partners of the United States to stop crime before it comes to the United States, fostering partnerships across national borders within important regions of the world, and advancing partner nations' engagement with U.S. law enforcement agencies. The ILEA program is an important part of the interagency U.S. effort to combat transnational criminal organizations and combat violent extremism, which facilitates stability in individual countries and regions, including Nigeria.

Justice and Security Dialogues

State's Bureau of International Narcotics and Law Enforcement, Office of Africa and Middle East Programs, awarded funds to the U.S. Institute of Peace to conduct the Justice and Security Dialogues project. Under this effort, citizens and authorities work to jointly address important security challenges within select communities of the Sahel and Maghreb, including in Nigeria. Participants share knowledge and skills and support each other across the broader region. The project is targeting a community population of 430,000 in the north local government of Jos in Plateau state. The goal of the effort is to improve the relationship between security providers and citizens and to support civilian security forces to be more effective, accountable, and responsive to community needs.

Northern Governors Dialogue

State's Bureau of Conflict and Stabilization Operations, Office of Africa Operations, awarded funds to the U.S. Institute of Peace to conduct the Northern Governors Dialogue. This effort supports governors of northern states, relevant federal government officials, and representative civil society leaders in addressing conflict drivers and stabilization-related challenges. The program is intended to strengthen their collective understanding of relevant issues and their capacity to develop sustainable and inclusive policies. The goal is to have an invested group of northern governors and a Senior Working Group of civil society leaders that have (1) identified a set of citizen-informed priority policy areas for northern Nigeria to prevent and resolve violent conflict, as well as to enhance stabilization efforts where appropriate, and (2) demonstrated a continued willingness to engage together on specific conflict-related issues.

Open Minds Project

State's Public Affairs Section at the U.S. Embassy Abuja conducts the Open Minds Project through a grant to an implementing partner. This project intends to train and mentor 80 primary and secondary school students from Plateau state and the Federal Capital Territory in critical thinking skills in support of CVE efforts. The goal is to better enable participants to resist messaging and recruitment efforts of violent extremist organizations.

Search for Common Ground, Early Warning/Early Response

State's Bureau of Democracy, Human Rights, and Labor, Office of Global Programming, is responsible for the Search for Common Ground, Early Warning/Early Response effort. This program establishes community-based early warning and early response systems and strengthens the capacity of state and local actors to secure communities. The intent is to enhance community and state actors' ability to protect citizens from imminent threats from Boko Haram.
Overall goals of the program are to increase the capacity of target communities to identify and analyze early warning signs of violence; to increase collaboration between communities and local government officials and security actors in responding to these signs; and to enhance mutual understanding of their roles in protecting their communities.

Strengthening Community Resilience through Peace Building

State's Public Affairs Section at the U.S. Embassy Abuja conducts the Strengthening Community Resilience through Peace Building project through a grant to an implementing partner. The project intends to train 50 youth in conflict resolution. The participants, supported by traditional elders, engage in local community-driven initiatives. The goal is to strengthen conflict resolution capacity at the community level by promoting peaceful dialogue and tolerance in southern Kaduna state.

The B Chronicles

State's Bureau of African Affairs, Office of Security Affairs, conducts this effort through a grant to an implementing partner who is to produce and air 52 episodes of a weekly radio drama based on stories of victims of the Boko Haram insurgency, especially women and children. The series focuses on reducing the risks of radicalization and recruitment, while encouraging adult listeners to reflect on the effects of the insurgency on their communities and vulnerable groups. The B Chronicles, created in English but performed in Hausa and Kanuri, are interpreted by the actors and aired on radio stations in Bauchi, Gombe, Adamawa, Yobe, and Borno states. The series targets a regional audience of approximately 6–8 million people. The goal of this project is to chronicle and help mitigate the current security challenges in Northern and Northeastern Nigeria through real life stories that encourage dialogue while fostering peace, respect, and the spirit of community. This effort is a TSCTP project, and the U.S. Embassy Abuja Public Affairs Section also supports it.

Training Almajiri as Peace Promoters in Kano

State's Public Affairs Section at the U.S. Embassy Abuja conducts the Training Almajiri as Peace Promoters in Kano project through a grant to an implementing partner. This project intends to train 240 students from the formal education system and the traditional Islamic school system (Almajiri) as peace ambassadors. Student participants advocate for peaceful conflict resolution, improvements in youth education, and incorporation of Almajiri schools into the formal educational system.

Training of Youth Leaders and Community Influencers

State's Public Affairs Section at the U.S. Embassy Abuja conducts the Training of Youth Leaders and Community Influencers effort through a grant to an implementing partner. The project intends to train 25 youth and community influencers from Adamawa, Borno, and Yobe states as CVE messengers with enhanced leadership skills. The goal is to develop peer-to-peer CVE messengers with proven community influence to mitigate propaganda and recruitment efforts of violent extremist organizations.

Transformation of Farmer/Herder Conflict in Plateau State

State's Public Affairs Section at the U.S. Embassy Abuja conducts the Transformation of Farmer/Herder Conflict in Plateau State effort through a grant to an implementing partner. This project convenes dialogues between farmer and herder stakeholders in Plateau state to develop mechanisms to resolve disputes between these groups. The goal is to establish a multistakeholder peace architecture committee to periodically review conflict risks and to develop a framework for adjudicating conflict.

United in Diversity
State's Public Affairs Section at the U.S. Embassy Abuja conducts the United in Diversity effort through a grant to an implementing partner. This project aims to increase a core team of 25 youths' conflict resolution skills and, through a Training of Trainers model, to train additional youths. The goal is to facilitate interreligious dialogue between religious groups.

Vocational Training for Women in Adamawa State

State's Bureau of African Affairs, Office of Security Affairs, conducts the Vocational Training for Women in Adamawa State effort through a grant to an implementing partner. This effort is a TSCTP project, and the U.S. Embassy Abuja Public Affairs Section also supports it. This project intends to provide rural women living in IDP camps and the surrounding communities with training and employment opportunities in poultry and cash-crop farming to help raise their social status, enhance their self-esteem, and encourage self-reliance to contribute income to their households. The goal is to help these women learn to recognize and resist techniques and methods of recruitment and radicalization to violence and to provide options for resisting recruitment into violent extremist organizations.

Youth for Healthy Communities Initiative

State's Bureau of African Affairs, Office of Security Affairs, conducts the Youth for Healthy Communities Initiative through a grant to an implementing partner. This program is a community initiative anchored in athletic competition that offers concurrent workshops and creates social and mentoring networks to engage youth on issues of civic responsibility, conflict mitigation, and the dangers of drug abuse and violent extremism. This effort is a TSCTP project, and the U.S. Embassy Abuja Public Affairs Section also supports it. The goals of this program are to build teamwork and leadership skills, foster citizen responsibility, and counter drug abuse and the risk of recruitment and radicalization to violence among vulnerable youth in the Kano city metropolitan area.

SYRIA

Access to Justice and Community Security Program

State's Bureau of Near Eastern Affairs (NEA), Office of Near Eastern Affairs Assistance Coordination, is responsible for the Access to Justice and Community Security Program, which provides training, equipment, and stipends to Free Syrian Police stations in liberated areas of Syria. The United States supports 56 Free Syrian Police stations comprising approximately 3,500 officers. Support includes vehicles, equipment, stipends, and training to help moderate community security actors to establish public security and stand up local unarmed civilian police forces. State conducts this effort through an implementing partner, and NEA manages this effort as part of the Syria Transition Assistance Response Team based in U.S. Embassy Ankara. The program's goal is to improve local stability, mitigate sectarian violence, and counter the influence of violent extremists.

Building the Legitimacy of Local Councils

State's NEA, Office of Near Eastern Affairs Assistance Coordination, conducts the Building the Legitimacy of Local Councils effort through an implementing partner. NEA manages this effort as part of the Syria Transition Assistance Response Team, which is based in U.S. Embassy Ankara.
The effort aims to build the capacity of local and provincial councils and civilian networks through (1) organizational development, standardized processes, and institutional capacity for effective civil administration; (2) strengthened cooperation between local and provincial councils, civil society organizations, Free Syrian Police, technical directorates, and moderate armed actors; (3) increased engagement between citizens and opposition governance structures; (4) increased inclusiveness in governance structures, especially with regard to representation of women, religious and ethnic minorities, and other marginalized populations; and (5) more effective provision of basic local governance services to meet citizen priorities and needs through cash subgrants for essential services. The goal of the effort is to strengthen moderate Syrian institutions by building their capacity to provide services, promote stability, counter extremism, and advocate for political dialogue.

Civil Society in Syria

State's NEA, Office of Near Eastern Affairs Assistance Coordination, conducts the Civil Society in Syria effort through an implementing partner. NEA manages this effort as part of the Syria Transition Assistance Response Team, which is based in U.S. Embassy Ankara. Through cash subgrants, this effort works to enhance civil society and advocacy organizations in eastern and western Syria to implement activities that (1) improve communication mechanisms with constituents and key stakeholders in reconciliation, conflict mediation, and advocacy efforts; (2) increase citizen understanding of rights and civic responsibilities; (3) enhance civil society advocacy efforts to promote strengthened competitive, inclusive, and transparent political processes; (4) improve organizational structures and internal processes that allow civil society organizations to become more effective public advocates; and (5) provide community services, such as vocational training for women and youth and essential services in areas newly liberated from ISIS where governance bodies are still emerging. The goal of the effort is to increase the ability of civil society organizations to serve, represent, and advocate for all Syrians and hold local governance structures accountable.

Civil Society Support for Peacebuilding, Reconciliation, and Conflict Mitigation

State's DRL conducts the Civil Society Support for Peacebuilding, Reconciliation, and Conflict Mitigation effort through implementing partners. These efforts provide funding to build local leadership and reconciliation processes and to support activities related to inclusive peace-building and conflict mitigation that are specifically designed to be more responsive to the evolving nature of the conflict. Current programming focuses on local community members, including women, religious minorities, and other marginalized populations, to use advocacy and other skills needed to effectively engage with armed factions. This work also supports the political transition process by fortifying the conditions for stabilization and empowering local leadership.

Explosive Remnants of War (ERW) Clearance

State's Bureau of Political-Military Affairs supports ERW clearance efforts in areas of northeast Syria recently liberated from ISIS, in particular the urban centers of Raqqa and Tabqa cities. Following their defeat, ISIS placed mass-produced, technologically advanced IEDs and booby-traps in homes, public spaces, farmland, and infrastructure to discourage the return of IDPs and cut off essential services.
As IDPs return to their communities, these devices continue to perpetuate ISIS's reign of terror by indiscriminately killing civilians and impeding stabilization operations. ERW clearance programs, which State conducts through implementing partners, support the urgent marking, survey, and clearance of explosive hazards from critical infrastructure associated with the delivery of clean water, electricity, healthcare, education, and governance to facilitate follow-on stabilization projects, the restoration of basic community services, and the return of IDPs in coordination with USAID and other State offices.

Meaningful Justice and Accountability for Syria

State's DRL conducts the Meaningful Justice and Accountability for Syria efforts through implementing partners. These efforts involve the documentation of human rights violations committed by all parties; increased coordination among international and local civil society groups on transitional justice processes, including memorialization; and support to survivors of torture, sexual and gender-based violence, and other gross human rights violations. The goal is to support the capacity of local civil society groups to secure and preserve documentation of human rights abuses and increase advocacy around accountability and transitional justice mechanisms, including domestic and regionally led efforts.

Mine Risk Education

State's Bureau of Political-Military Affairs delivers Mine Risk Education, through nongovernmental organizations, to affected communities by teaching children and young adults about the dangers posed by explosive hazards. Also, due to the lack of national capacity, a mine action nongovernmental organization collects, stores, and disseminates data on areas contaminated and cleared to the coalition, nongovernmental organizations, the humanitarian community, and the military.

Strengthening Social Cohesion in Northern Syria

State's DRL awarded funds to the U.S. Institute of Peace to conduct the Strengthening Social Cohesion in Northern Syria effort, which aims to provide positive engagement and lines of communication across religious and sectarian groups, particularly in key districts prone to sectarian violence. The goals are to (1) support Syrian civilian networks to maintain stabilization and mitigate violence and (2) manage localized ceasefires, including reconciliation and stabilization of areas as they are being liberated.

Syria's Education Program (Idarah/Injaz)

State's NEA, Office of Near Eastern Affairs Assistance Coordination, conducts Syria's Education Program through an implementing partner that works closely with opposition education directorates in Western Syria and moderate education actors in newly liberated areas in the east to (1) support the development of the Syrian Interim Government's aligned Provincial Education Directorates and other education actors to better manage education in non–regime-controlled communities; (2) provide stipends and salaries for education staff to ensure schools have people to deliver education; (3) engage in teacher training; (4) provide light refurbishments and supplies for damaged schools; and (5) provide psychosocial support and training to children, teachers, and community members. NEA manages this effort as part of the Syria Transition Assistance Response Team, which is based in U.S. Embassy Ankara. The goal of this effort is to improve Syrians' equitable access to moderate, vital education services for youth and children.
We did not independently verify whether State’s reported list of conflict mitigation and stabilization efforts included all such efforts in Iraq, Nigeria, and Syria (and in neighboring countries for Syria). For the purposes of this list of efforts and goals, “efforts” includes what our sources referred to as “programs,” “program-level initiatives,” and “projects.” Countries for which State conducts efforts are shaded in gray. Appendix III: USAID Reported Conflict Mitigation and Stabilization Efforts for Iraq, Nigeria, and Syria, Fiscal Year 2017 Conflict mitigation and stabilization effort IRAQ USAID’s description of effort and its goals The U.S. Agency for International Development (USAID), along with other international donors, supplies funding to the UNDP FFS. The UNDP, at the request of the Prime Minister of Iraq, and with support from leading members of the Coalition to Degrade and Defeat the Islamic State of Iraq and the Levant (ISIL), established the FFS in June 2015 to help rapidly stabilize newly retaken areas. The FFS works in areas liberated from the Islamic State of Iraq and Syria (ISIS)—another name for ISIL—to restore essential services and kick-start the local economy. The FFS rehabilitates water, health, electricity, education, and municipal light infrastructure. The FFS also provides temporary employment to local laborers to remove rubble and grants to small businesses to restock and reopen. The aim of the FFS is to help restore confidence in the leading role of the Iraqi government in newly retaken areas, give populations a sense of progress and forward momentum, and enable the voluntary return of internally displaced persons. USAID’s Office of Peace and Democratic Governance (PDG) is responsible for the Building Bridges Between Herders and Farmers in Nasarawa, Plateau, and Kaduna States effort. The overall goal is to strengthen engagement and understanding to reduce conflict between the nomadic pastoralist and sedentary farming communities in the three states. Given the herders’ and farmers’ ethnic, religious, economic, and lifestyle differences, these two groups rarely come into contact with each other outside of confrontational scenarios or passing encounters, creating a deadly social disconnect that risks dehumanizing each community in the other’s eyes. The program aims to achieve its goal by (1) improving intercultural understanding between nomadic pastoralist and sedentary farming communities and (2) building capable coalitions between community leaders, civil society, and government to prevent conflict between nomadic pastoralist and sedentary farming communities. USAID’s Education Office is responsible for the ECR, which, addresses the main learning needs of internally displaced and host community pupils affected by the crisis in Northeast Nigeria through nonformal learning centers, Youth Learning Centers, and Adolescent Girls Learning Centers. The ECR provides learning in protective centers, supports integration of pupils from nonformal to formal schools, and works within communities hosting internally displaced persons. For example, the ECR established more than 935 nonformal learning centers that provided services to internally displaced children and youth and their host communities affected by violence in Adamawa, Bauchi, Borno, Gombe, and Yobe. Nonformal centers may be located in churches, mosques, Qur’anic schools, and other locations. 
The services provided included access to quality education, psycho-social counseling, child-friendly spaces, and opportunities for peer reading, mentoring, counseling, and vocational skills training. The ECR also trains and mobilizes instructors to provide conflict-sensitive lessons, while engaging communities and local leaders to increase education options, such as nonformal learning centers. The ECR has provided assistance to over 80,341 individuals since 2014. The overall goal is to support the efforts of northeastern states and local governments to take full ownership of the continued education of internally displaced children.

Engaging Communities for Peace in Nigeria

USAID's PDG is responsible for the Engaging Communities for Peace in Nigeria effort. The initial goal was to reduce violence between farmers and pastoralists in Nigeria's Middle Belt states in target sites by (1) strengthening the capacity of farmer and pastoralist leaders to resolve disputes in an inclusive, sustainable manner; (2) leveraging social and economic opportunities to build trust across lines of division; and (3) fostering engagement among farmer-pastoralist communities, local authorities, and neighboring communities to prevent conflict. Under a scope and cost extension, PDG expanded the effort to help with conflict sensitivity integration throughout the USAID mission's portfolio and to build the technical and operations capacity of nongovernmental organizations working on peace building in the northeast. PDG intends to do this by providing (1) conflict mitigation, monitoring and evaluation, and administrative/financial management training to civil society organizations in the northeast and (2) conflict analysis and conflict mitigation training for USAID mission personnel and implementing partners anywhere in the country.

Nigeria Regional Transition Initiative

USAID's Office of Transition Initiatives (OTI) launched the Nigeria Regional Transition Initiative in September 2014 to minimize conditions that allow terrorism to flourish, in turn reducing Boko Haram and ISIS-West Africa recruitment and support for their ideology and the insecurity they cause. Following a Strategic Review Session in September 2017, OTI established a new program goal: to deny terrorists space to operate. The goal has a two-pronged focus: (1) to "compete" with ISIS-West Africa, thereby reducing its appeal before it is able to seize and hold significant territory, and (2) to continue to work on issues that weaken Boko Haram's ability to operate. OTI's two main objectives to achieve this goal are to offer alternatives to extremist action for vulnerable individuals and increase community resilience to extremist action.

Training of Religious Leaders for National Coexistence (TOLERANCE)

USAID's PDG is responsible for the TOLERANCE effort, which aims to support stability in Nigeria by enhancing the legitimacy and capacity of governance structures to defend religious freedom. TOLERANCE supports community-based peacebuilding approaches by strengthening the capacity of religious and traditional leaders, women and youth groups, government officials, and civil society to mitigate and manage conflicts and improve responses to threats and outbreaks of violence. TOLERANCE is implemented in seven states—Borno, Bauchi, Imo, Kaduna, Kano, Plateau, and Sokoto.
A human rights funding component promotes the culture of interfaith peaceful coexistence between target states in the North and South, respect for human rights, religious freedom, and nonviolent elections. The goal of TOLERANCE is to develop an active network of religious, government, and civil society leaders that can effectively address ethno-religious violence in Northern Nigeria and beyond through shared strategies and common messages that have strong resonance and popular support from a wide range of stakeholders.

SYRIA

Contributions to the Syria Recovery Trust Fund (SRTF)

USAID contributes funding to the SRTF, a multidonor trust fund initiated by the Group of Friends of the Syrian People and its Working Group on Economic Recovery and Development. The SRTF's core objective is to relieve the suffering of the Syrian people affected by the ongoing conflict through recovery and rehabilitation efforts undertaken in partnership with the Interim Government of the Syrian Opposition Coalition, local councils, local community organizations, and service providers. While the conflict continues, the SRTF assists Syrian communities in opposition-controlled territories by funding essential services and early recovery programming in critical sectors, including health, electricity, water, agriculture and food security, education, and waste management. For example, the SRTF completed the renovation of two gynecological operating rooms, two obstetrics rooms, and adult and pediatric intensive care units, and it provided incubators, an oxygen generation system, and 6 months' worth of essential medications to a hospital in Aleppo Governorate so that it could treat an average of 1,000 patients each month. More than 2 million Syrians have received assistance through more than 30 SRTF projects. USAID funds totaling almost $60 million to date have leveraged other donor funds totaling $190 million. USAID's goal is to support the restoration of essential services and early recovery. USAID's Bureau for the Middle East (ME) provides support for the SRTF.

PRIDE

USAID's ME is responsible for the PRIDE program, which supports the establishment of robust, inclusive, effective, and accountable democratic processes and institutions in opposition-held areas and areas liberated from ISIS and advances freedom, dignity, and development. The goal of the program is to increase political and civic participation and representation of women, youth, and minorities to foster public and stakeholder confidence in peaceful and representative transitional political processes and bolster opposition credibility. PRIDE is also intended to increase knowledge and understanding of democratic processes among the Syrian population, including consensus building, coalition formation, citizen and stakeholder engagement, and elections, which will enhance an inclusive Syrian-led transition.

SLS

USAID's ME and the Offices of U.S. Foreign Disaster Assistance and Food for Peace are responsible for the SLS program, which is intended to help increase production and productivity of key products that have both food security and market potential in moderate, opposition-held areas and areas liberated from ISIS. The effort is based on the theory that if communities have humanitarian support in the short term and have access to agricultural inputs and extension, they will adopt behaviors that increase productivity along with household-level income, ultimately improving food security and resilience to shocks.
ME and the Office of U.S. Foreign Disaster Assistance have funded an implementing partner to initiate this effort in fiscal year 2017. If this effort is successful, USAID intends to replicate it in other barley-belt areas of Syria, including in the Idleb, Raqqa, and Hasakah governorates.

SES II

USAID's ME is responsible for the SES II effort, which supports the restoration of essential services through local councils in communities. The essential services include support for water services, electricity, sewage systems, public use buildings, agricultural infrastructure, and market access. The program provides technical and material assistance, including capacity building for local councils and civil society, engineering expertise and other training, and cash grants to communities. The goal of the program is to restore essential services and strengthen institutions in non-regime areas.

Syria Regional Program (SRP)

USAID's OTI is responsible for the SRP. The SRP works closely with trusted and vetted local organizations to implement quick-impact activities that promote an inclusive and stable Syria. OTI has conducted this effort since 2012 through an implementing partner that has implemented about 538 activities through about 155 local and provincial partners and 570 subpartners with a budget of about $172.5 million. OTI works along three lines of effort: (1) enable the early recovery of areas liberated from ISIS; (2) strengthen communities' ability to resist extremist groups; and (3) maintain and increase the influence of strategic moderate actors. For example, OTI partners restore services in communities liberated from ISIS to reduce ISIS's appeal; support local councils and civil society organizations, increasing the influence of moderate actors in strategic areas where extremist groups are vying for control; and support Syrian Civil Defense and impartial emergency responders who amplify the voice of Syrians struggling against extremism and authoritarianism. OTI aims to support resistance to extremists, particularly ISIS, by strengthening individuals and groups who are saving lives, meeting basic needs, promoting moderate values, and engaging with vulnerable populations.

We did not independently verify whether USAID's reported list of conflict mitigation and stabilization efforts included all such efforts in Iraq, Nigeria, and Syria (and in neighboring countries for Syria). For the purposes of this list of efforts and goals, "efforts" includes what our sources referred to as "programs," "program-level initiatives," and "projects." USAID conducted its efforts through grants and contracts to implementing partners.

Appendix IV: DOD Reported Stabilization Efforts for Iraq and Syria, Fiscal Year 2017

IRAQ

Medical Staff of the Combined Joint Forces Land Component Command–Operation Inherent Resolve provided immediate medical trauma supplies to the World Health Organization to fill a gap in medical supplies available to treat injured civilians. The project was coordinated with the Department of State (State) and the U.S. Agency for International Development (USAID) and was funded through the Overseas Humanitarian, Disaster, and Civic Aid (OHDACA) appropriation.
The project was intended to increase the chance of survival for civilians affected by military operations; increase civilian confidence in the government and the humanitarian assistance community; and provide access, influence, and visibility to the Department of Defense (DOD).

SYRIA

U.S. Army Civil Affairs (CA) personnel of Special Operations Joint Task Force–Operation Inherent Resolve (SOJTF–OIR) provided winterization kits, including jackets, hats, gloves, socks, and blankets, to Syrian civilians displaced from their homes in the Raqqa region. The project provided much needed cold weather items. This project was coordinated with State and USAID and funded through the OHDACA appropriation. The project was intended to alleviate human suffering; pull the population away from Islamic State of Iraq and the Levant (ISIL) population centers; and provide access, visibility, and influence for DOD forces.

U.S. Army CA personnel of SOJTF–OIR provided 1,200 winterization kits consisting of jackets, hats, gloves, and socks to Syrian families in the Hamad desert. This project addressed a critical need among the poorest and most vulnerable of the Syrian population. The project was coordinated with State and USAID and was funded through the OHDACA appropriation. The project was intended to alleviate human suffering; support DOD efforts to diminish ISIL influence; and provide access, visibility, and influence for DOD forces.

U.S. Army CA personnel of SOJTF–OIR provided assistance, including food, cooking fuel, construction material, and garbage removal, for up to 31,000 civilians in Manbij, Syria. DOD undertook this project because USAID and State were unable to provide any support to the civilians in need. This project was coordinated with State and USAID and was funded through the OHDACA appropriation. The project was intended to alleviate human suffering and improve the civilian populace's perception of the local council.

U.S. Army CA personnel of SOJTF–OIR provided basic education supplies and equipment, including desks, chairs, and whiteboards, to schools in Karamah. This project was coordinated with State and USAID and funded through the OHDACA appropriation. The project was intended to assist in the reestablishment of education services in the area; enhance the local council's ability to provide essential services and increase its standing with the community; and provide access to DOD forces operating in the area.

U.S. Army CA personnel of SOJTF–OIR provided basic education supplies and equipment, including desks, chairs, whiteboards, and backpacks, to schools in Kobani. This project was coordinated with State and USAID and funded through the OHDACA appropriation. The project was intended to assist in the reestablishment of education services; improve the capacity of the local government to provide essential services; improve the perception of the local council; and provide access, visibility, and influence for DOD forces.

Manbij School Supplies

U.S. Army CA personnel of SOJTF–OIR provided classroom furniture and school supplies to 4,000 students in Manbij. The project, managed through the local council, provided a viable opportunity to resume attending classes for students who had not attended school in over 4 years. The project was coordinated with State and USAID and funded through the OHDACA appropriation.
The project was intended to assist in the reestablishment of education services; improve the perception of the local council; and provide access, visibility, and influence for DOD forces.

U.S. Army CA personnel of SOJTF–OIR provided winterization kits, including jackets, hats, gloves, socks, and blankets, to civilians in the Raqqa region. The project provided much needed winter clothing to civilians who had fled their homes due to ISIL operations. The project was coordinated with State and USAID and funded through the OHDACA appropriation. The project was conducted through the local council and intended to alleviate human suffering, build the council's legitimacy, and provide access to DOD forces.

U.S. Army CA personnel of SOJTF–OIR provided winterization kits, including jackets, hats, gloves, socks, and blankets, to civilians in the Manbij region. The project provided cold weather items, through the local council, to civilians fleeing ISIL forces because State and USAID were unable to provide support. The project was coordinated with State and USAID and funded through the OHDACA appropriation. The project was intended to alleviate human suffering, elevate the standing of the local council with the populace, and improve access to DOD forces operating in the area.

We did not independently verify whether DOD's reported list of conflict mitigation and stabilization efforts included all such efforts in Iraq, Nigeria, and Syria (and in neighboring countries for Syria). For the purposes of this list of efforts and goals, "efforts" includes what our sources referred to as "programs," "program-level initiatives," and "projects."

Appendix V: USIP Reported Conflict Prevention and Resolution Efforts for Iraq, Nigeria, and Syria, Fiscal Year 2017

IRAQ

Advancing the Role of Iraqi Minorities in Stabilization and Governance

The U.S. Institute of Peace's (USIP) Middle East and Africa Center (MEA) is responsible for the Advancing the Role of Iraqi Minorities in Stabilization and Governance effort with funding from and in partnership with the Department of State's (State) Bureau of Democracy, Human Rights, and Labor. This effort creates mechanisms for gathering and sharing high-quality information with key Iraqi decision makers and stakeholders on the minorities' situations, regardless of whether these groups return home or remain displaced. The project utilizes and acts upon information gathered through facilitated local dialogues that prevent violence (especially violence stemming from revenge killing) and/or reduce tensions between displaced minorities and host communities. Improving access to this information is intended to strengthen the role of civil society in stabilization and enable Iraqi decision makers to enact more inclusive and information-based governance policies. The specific objectives are to (1) improve key decision makers' understanding of conflict drivers in liberated and minority-rich areas and (2) reduce tensions among and between communities in Nineveh and other minority areas during the stabilization process and in the build-up to provincial-level, Kurdistan Regional Government, and national elections. The goal of the effort is to improve stabilization and promote inclusive governance in areas liberated from the Islamic State of Iraq and Syria (ISIS) in Nineveh province and other minority-rich territories.

Facilitated Dialogues

USIP's MEA and its strategic partner, Sanad for Peacebuilding, conduct the Facilitated Dialogues effort in Iraq.
The effort supports facilitated, outcome-oriented dialogue processes that enable local reconciliation in areas liberated from ISIS. This type of engagement has two main objectives in the current context: (1) preventing revenge acts of violence by communities in conflict and (2) identifying and addressing the main barriers impeding the return of internally displaced persons (IDPs). Such engagement is intended to increase the resilience of communities to the persistent threat of violent extremism from ISIS remnants, the Popular Mobilization Forces, or others. USIP's Center for Applied Conflict Transformation (ACT) is responsible for the Justice and Security Dialogue (JSD) – Lessons Learned effort. Approximately 200 security and community representatives from three major cities affected by the aftermath of ISIS participated in nine JSD sessions as part of an assessment on preventing violent extremism in Iraq. The project culminated in a conference attended by members of the JSD-Community of Practice (COP), a network of local leaders committed to dialogue processes established by USIP through its ongoing engagement in Iraq. The project's three objectives are to (1) better understand local drivers of violent extremism through the multiple perspectives included in the JSD-COP, (2) strengthen the capacity of the JSD-COP to continue efforts to sustain local stability and promote the rule of law, and (3) identify key lessons learned to further strengthen future JSD initiatives in the region. USIP's ACT is responsible for the Mapping Post-ISIS Iraqi Religious Groups for Peace and Reconciliation effort. ACT is partnering with country teams to undertake mappings of influential religious actors, institutions, and ideas in conflict zones. This project identifies and maps influential religious leaders in specific conflict zones with the long-term goal of including them in future Iraqi-led mediations, dialogues, and peace and reconciliation efforts. USIP's MEA is responsible for the Problem-Solving Dialogues for Iraq's Religious Minorities and Governance Issues effort with funding from and in partnership with State's Bureau of Democracy, Human Rights, and Labor. The effort addresses tensions and disputes between the Christian and Shabak communities in Nineveh in the wake of ISIS, pushing toward outcome-oriented solutions through facilitated dialogues led by experienced Iraqi facilitators. This effort also provides the USIP-created Alliance of Iraqi Minorities (AIM) with experience in project development and execution as AIM seeks to improve its impact on the provincial budget process, curriculum reform, outreach, and specific legislation pertaining to minorities. The effort supports AIM's organizational capacity toward becoming more independent, self-reliant, and self-sustaining by developing the capacity to assume total responsibility for its organizational, administrative, programmatic, financial, and logistical affairs. Establishing facilitated dialogues among Iraq's religious minorities and, most importantly, between those groups and the majority Muslim communities is especially important, as Nineveh is home to one of Iraq's largest concentrations of minorities. The goal of the effort is for Iraqis—minorities in particular—to prevent the recurrence of violence through peaceful dialogue with each other and various stakeholders, including national, provincial, and local governments. USIP's MEA is responsible for the Support to Sanad for Peacebuilding effort.
This effort provides ongoing technical and financial support to USIP's strategic national partner, Sanad, and the networks it manages, including the Network of Iraqi Facilitators and the Alliance of Iraqi Minorities. Sanad and its affiliated networks serve as a resource for conflict analysis, bringing disputing parties together through facilitated dialogue and providing technical expertise for training and peacebuilding. The goal, through helping Sanad become Iraq's leading and self-sustaining peacebuilding organization, is to increase Iraqi capacity and leadership in conflict prevention and mitigation. USIP's MEA is responsible for the Training Iraqis in Conflict Management effort. This project provides training to both governmental and nongovernmental organizations, including officials and civil society activists in Kurdistan working to prevent the escalation of tensions among the nearly 1.8 million IDPs located there and in local communities. It also provides technical support to the Kurdish Regional Government on the implementation of Iraq's national action plan under United Nations Security Council Resolution 1325, and ongoing assistance to Iraq's National Reconciliation Committee and other governmental bodies that play a key role in local and national reconciliation. The goal of the project is to enable a variety of Iraqi organizations to use the tools and skills taught to them by professional trainers and USIP staff to resolve local tensions that have the potential to reignite sectarian conflict on a large scale. Building the skills of Iraqis in this field is intended to enable them to solve issues stemming from extremist violence and local sectarian conflict without external aid, thus stopping violence at its sources before it spreads to other communities and causes further destabilization. USIP's ACT was responsible for the Youth Leaders' Exchange with His Holiness the Dalai Lama. In November 2017, USIP and the Dalai Lama hosted a second annual dialogue with youth peacebuilders drawn from countries across Africa, Asia, and the Middle East, including Iraq. Many of these countries grapple with the world's deadliest conflicts, as well as campaigns by extremist groups to incite youth to violence. The youth leaders are among their countries' most effective peacebuilders. The dialogue with the Dalai Lama was intended to help them build the practical skills and personal resilience they need to work against the tensions or violence in their homelands. The overarching goal was to strengthen the capacity of youth to create positive change as leaders and peacebuilders in their communities by partnering with more traditional leaders.
NIGERIA
USIP's MEA is responsible for the development of a USIP strategy for countering violent extremism (CVE) for Nigeria that is integrated with its Nigeria country strategy and consistent with USIP's overall CVE strategy. Working in collaboration with ACT, MEA partners with a local organization for project implementation and uses local staff for support. This effort is intended to further USIP's current process of strengthening its Nigeria country strategy to guide program initiatives for its Africa team and USIP more broadly. The goal is to deepen and expand USIP's programming and thought leadership in the field of CVE through initiatives based on an evidence-based assessment. USIP's MEA and ACT are responsible for the Election Security Assessment.
Together with selected partners, USIP began three assessment rounds in Washington, D.C., and Nigeria focused on assessing election violence risks and gaps in electoral security and peacebuilding planning. USIP works closely with State's Nigeria desk, USAID's political section, the USAID mission at U.S. Embassy Abuja, and relevant international and local partners engaged in election programming. The assessment will produce programmatic recommendations to address identified vulnerabilities and seize opportunities for the promotion of peaceful elections. The goal of the effort is to help ensure that the prevention activities by USIP, U.S. government partners, and civil society are better integrated and evidence-based. USIP's ACT is responsible for the Generation Change Fellows Program (GCFP), which strengthens youth leaders' peacebuilding skills and creates a community of practice through which they can learn from and mentor each other, share best practices, and work to create positive change in their communities. GCFP carefully selects small cohorts of dedicated peacebuilders aged 18–35 through a highly competitive application process. These Fellows hold leadership roles within their local communities and tackle challenges, from countering violent extremism to enhancing gender equality. The goal of the GCFP is to increase youth leaders' participation in and contribution to conflict transformation and positive social change in conflict-affected communities. USIP's ACT, with funding from and in partnership with State's Bureau of International Narcotics and Law Enforcement Affairs, is responsible for the Justice and Security Dialogue Project in the Sahel and Maghreb. The project offers opportunities to develop, refine, and test models and tools through field pilot experimentation in six countries, including Nigeria. The project aims to strengthen the relationship between civilian security services and communities at the local level and to pilot a model for bridging the gap between police and citizens for use across the region. Through a series of dialogues and activities supported by USIP and local partners, participants will collaboratively identify and address concrete security challenges at the local level. USIP's MEA is responsible for the Lake Chad Basin and Sahel Working Group. USIP will convene a working group focused on addressing the drivers of violent extremism in the Lake Chad Basin and the Sahel. This will include developing a research framework, drawing on ACT's CVE assessment tool, and commissioning a series of papers by academics, policy experts, and practitioners from countries across the region. The goal is to advance USIP's thought leadership in the field of preventing violent extremism by studying the impact of the Boko Haram crisis in the context of broader regional dynamics and the potential for more regional approaches to foster resilience to violent extremism. USIP's MEA is responsible for the Lake Chad Basin Project, with funding from and in partnership with State's Bureau of Conflict and Stabilization Operations. This project builds upon over a decade of programming in Nigeria to implement a multiyear program that seeks to strengthen the capacity of Nigerian opinion leaders and policy makers to foster sustainable and inclusive strategies toward addressing the root causes of violent conflict, particularly in Northern Nigeria.
Some activities included (1) convening a 3-day symposium in Washington, D.C., of governors from states across northern Nigeria to foster key exchanges and critical discussions with leading American and international experts on the drivers of violent conflict in the region and how to resolve them; (2) creating a senior working group of 11 Nigerian civic leaders that can engage strategically with the governors and work collaboratively to articulate a set of policy priority areas toward addressing the drivers of conflict; (3) conducting quantitative and qualitative studies in Borno and Plateau states to understand citizen perceptions of the drivers of violent conflict and how policymakers should address them; and (4) supporting sustained, facilitated engagement between the governors and members of the senior working group to help shape a more inclusive policy platform toward preventing violent conflict and addressing stabilization needs in target states across the north. The goal of this project is to have an invested group of governors from across the northern states in Nigeria and a senior working group of civic leaders identify a set of citizen-informed priority policy areas for northern Nigeria to prevent and resolve violent conflict, increase stabilization efforts where appropriate, and demonstrate a continued willingness to engage together on specific conflict-related issues. USIP's MEA is responsible for the Network of Nigerian Facilitators. USIP is identifying and supporting a group of community leaders, including youth, women, and religious leaders, with dialogue facilitation skills to prepare, convene, and facilitate intergroup dialogues in their communities. In addition to building the abilities of the facilitators to locally manage conflict, USIP will provide financial support to the facilitators to implement localized conflict management activities. The goal is to build capacity and provide ongoing support to a network of community facilitators that can prevent and resolve conflict nonviolently. USIP's MEA is responsible for the Nigeria Conversation Series. MEA partners with a local organization to implement the series and uses local staff for support. The series brings together a broad array of policy professionals for in-depth discussions on current issues in Nigeria and to explore options for preventing and resolving violent conflict in the country. The purpose of the series is to inform and influence Nigerian, U.S., and international policies and programs that seek to address conflict in Nigeria. The discussions seek to promote improved understanding and shared analysis of the conflict dynamics in the country through engagement with informed researchers and practitioners. USIP's MEA is responsible for the Nigeria's Imam and Pastor: Faith at the Front project. In fiscal year 2017, the findings from USIP research were used to inform the production of a short USIP video to contribute to understanding (1) the role of religious leaders in peacebuilding and (2) that grassroots dialogues are necessary for reducing violence but are complemented by changes in governance. Also, USIP produced a series of video pieces to highlight the work and voices of USIP's country and partner organizations and provide practical tools to inform policymakers and partners in their work in reducing violent conflict.
USIP’s ACT, with funding from and in partnership with USAID, is responsible for the Research on Violent Extremism, Politics, Religion, and the Higher Education Sector in the Lake Chad Basin effort. Under the rubric of the RESOLVE Network—a global consortium of research organizations established by USIP—this project is intended to enhance USAID’s assistance to the educational sector in the Lake Chad Basin region by providing research support for locally driven analysis in Nigeria, Chad, and Cameroon. The primary purpose of the RESOLVE Network initiative in the Lake Chad Basin is to assess the role of the state, civil society, and other nonstate actors in shaping the political divides over the role of religion in education and community and state responses to extremism in Chad, Nigeria, and Cameroon. USIP’s MEA is responsible for the Support to State Peacebuilding Institutions effort, which is being implemented by a local partner with the support of local USIP staff in Abuja. The Africa Team, in partnership with USIP’s ACT, provides training for the Plateau Peacebuilding Agency, the Kaduna Peace Commission and the relevant peacebuilding entities in the Borno state administration on conflict analysis, conflict management and facilitation. USIP delivers the training through a combination of online and in-person training. The Africa team identifies ways to engage the Interfaith Mediation Center (the Imam and the Pastor) to share their expertise and experiences. The goal is to advance the skills of the practitioner peacebuilding community in Nigeria to inform policy to prevent and resolve conflict at the state-level through online and in- person training. USIP’s MEA is responsible for the Supporting Transition to Civilian-Led Governance and Security effort, which is being implemented by a local partner with the support of local USIP staff in Abuja. The Africa team developed a framework for the transition from military and vigilante security to community-oriented policing through (1) research on comparative experiences in the transition from nonstate actors to civilian governance and (2) a series of roundtables and engagements with The Multinational Joint Task Force. The research seeks to incorporate USIP’s experiences in Afghanistan, Iraq, Colombia, Nepal, and Myanmar to offer concrete lessons, tools, and approaches. The goal is to contribute evidence-based and comparative research that will inform discussions on civil-military relationships, justice, security, and rule of law reform in the Northeast and Lake Chad Basin. USIP’s MEA is responsible for the Women Preventing Violent Extremism effort, with funding from and in partnership with State’s Bureau of Counterterrorism. The project is implemented by a local organization. This project began as a pilot project in 2012 and is designed to increase women’s agency and influence in strengthening community- level resilience to violent extremism through engagement and collaboration with security actors. The project was piloted in Plateau and Kaduna states in Nigeria and in Nairobi, Mombasa, and Garissa, Kenya. The project aims to understand ways in which trust and cooperation between women in civil society and the security sector can best be fostered and supported. USIP’s description of effort and its goals USIP’s ACT is responsible for the Youth Leaders’ Exchange with His Holiness the Dalai Lama. 
In November 2017, USIP and the Dalai Lama hosted a second annual dialogue with youth peacebuilders drawn from countries across Africa, including Nigeria; Asia; and the Middle East. Many of these countries face the world's deadliest conflicts, as well as campaigns by extremist groups to incite youth to violence. The youth leaders are among their countries' most effective peacebuilders. The dialogue with the Dalai Lama was intended to help them build the practical skills and personal resilience they need to work against the tensions or violence in their homelands. The overarching goal was to strengthen the capacity of youth to create positive change as leaders and peacebuilders in their communities by partnering with more traditional leaders.
SYRIA
USIP's MEA is responsible for the Dialogues with the Interfaith and Other Key Leaders effort in partnership with and with funding from State's Bureau of Democracy, Human Rights, and Labor. In Northeastern Syria, USIP works with Syrian partners to strengthen civil society's engagement and coordinating role with civic, religious, and tribal leaders in al-Qamishli/al-Qahtaniya. The effort aims to address drivers of tensions and conflicts through an evidence-based, outcome-oriented dialogue process. The overall goal is to strengthen social cohesion among and between the communities in Northern Syria, enable the return of displaced communities, and stem potential conflict. USIP's MEA is responsible for three ongoing grants related to the Syria conflict in neighboring countries. The first is a grant to War Child to work with a local network of Jordanian organizations training young Syrian refugees in Amman and vicinity on youth leadership, peacebuilding, and conflict resolution skills. The two other grants fund (1) a Lebanese civic group that supported mediation and training aimed at reducing refugee-related tensions in Lebanon's Bekaa Valley and at enabling Syrian refugees to find jobs and register their children in schools, and (2) a nongovernmental organization that trained Syrian and Lebanese journalists on conflict-sensitive reporting about the Syrian refugee crisis and on raising awareness of the benefits the refugees bring to the host community. These grants are aimed at reducing tensions that threaten peace and stability in Lebanon and Jordan, which stem from the burdens of absorbing Syrian refugees. USIP's ACT was responsible for the Youth Leaders' Exchange with His Holiness the Dalai Lama. In November 2017, USIP and the Dalai Lama hosted a second annual dialogue with youth peacebuilders drawn from countries across Africa, Asia, and the Middle East, including Syria. Many of these countries face the world's deadliest conflicts, as well as campaigns by extremist groups to incite youth to violence. The youth leaders are among their countries' most effective peacebuilders. The dialogue with the Dalai Lama was intended to help them build the practical skills and personal resilience they need to work against the tensions or violence in their homelands. The overarching goal was to strengthen the capacity of youth to create positive change as leaders and peacebuilders in their communities by partnering with more traditional leaders. We did not independently verify whether USIP's reported list of conflict mitigation and stabilization efforts included all such efforts in Iraq, Nigeria, and Syria (and in neighboring countries for Syria).
For the purposes of this list of efforts and goals, "efforts" includes what our sources also referred to as "projects."
Appendix VI: Comments from the Department of State
Appendix VII: Comments from the U.S. Agency for International Development
Appendix VIII: Comments from the Department of Defense
Appendix IX: Comments from the U.S. Institute of Peace
Appendix X: GAO Contact and Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the individual named above, Godwin Agbara (Assistant Director), Kathleen Monahan (Analyst-in-Charge), David Dayton, Martin de Alteriis, Mark Dowling, Emily Gupta, and Jasmine Senior made key contributions to this report. Additional assistance was provided by Michael Fahy, Christopher Keblitis, Judith McCloskey, James Reynolds, Kira Self, and Sarah Veale.
Why GAO Did This Study
The United States has a national security interest in promoting stability in conflict-affected countries to prevent or mitigate the consequences of armed conflict, according to the 2017 National Security Strategy. State, USAID, and DOD have reported that a collaborative government approach is an essential part of maximizing the effectiveness of U.S. efforts in conflict-affected areas. GAO was asked to review U.S. conflict prevention, mitigation, and stabilization efforts abroad. This report (1) describes examples of conflict prevention, mitigation, and stabilization efforts that U.S. agencies and USIP conducted in Iraq, Nigeria, and Syria and their goals in fiscal year 2017 and (2) examines the extent to which U.S. agencies and USIP incorporated key collaboration practices to coordinate their efforts. GAO collected data from the agencies and USIP on their efforts and goals in Iraq, Nigeria, and Syria. GAO selected these countries based on U.S. national security interests, among other criteria. GAO reviewed agency and USIP documents, interviewed officials, and conducted fieldwork in Iraq, Nigeria, and Jordan. GAO assessed coordination against key practices identified by GAO to enhance interagency collaboration.
What GAO Found
The Departments of State (State) and Defense (DOD), the U.S. Agency for International Development (USAID), and the U.S. Institute of Peace (USIP)—an independent, federally funded institute—reported conducting various efforts to address conflict prevention, mitigation, and stabilization for Iraq, Nigeria, and Syria in fiscal year 2017. For example, in Iraq, State supported efforts to remove improvised explosive devices from homes and infrastructure (see figure); USAID contributed to the United Nations to restore essential services; DOD provided immediate medical trauma supplies to the World Health Organization to treat injured civilians; and USIP conducted facilitated dialogues to enable local reconciliation in areas liberated from the Islamic State of Iraq and the Levant. In conducting U.S. conflict prevention, mitigation, and stabilization efforts, State, USAID, DOD, and USIP have addressed aspects of key collaboration practices such as elements of bridging organizational cultures and leadership. However, the agencies have not documented their agreement on coordination for U.S. stabilization efforts through formal written guidance and agreements that address key collaboration practices. GAO found the following, for example, regarding the extent to which these entities have used key collaboration practices.
Bridging organizational cultures: U.S. agencies have established various mechanisms to coordinate their efforts, such as interagency working groups and staff positions focused on coordination. USIP convenes interagency actors, including State, USAID, and DOD, through various programs and events.
Defining outcomes and accountability: One or more agencies have established some common outcomes and accountability mechanisms for their stabilization efforts in Iraq, Nigeria, and Syria. Moreover, through an interagency review of U.S. stabilization assistance, State, USAID, and DOD identified a need to develop an outcome-based political strategy outlining end states for U.S. stabilization efforts and strategic analytics to track and measure progress, among other needs.
Written guidance and agreements: Although State, USAID, and DOD have developed a framework for stabilization, they have not documented their agreement on the key collaboration practices identified, such as defining outcomes and accountability and clarifying roles and responsibilities. According to key practices for enhancing interagency collaboration, articulating agreements in formal documents can strengthen collaborative efforts and reduce the potential for duplication, overlap, and fragmentation.
What GAO Recommends
State, USAID, and DOD should document agreement on their coordination for U.S. stabilization efforts through formal written guidance and agreements addressing key collaboration practices. The agencies concurred with the recommendations.
Background
There are three main types of U.S. commercial fishing vessels: catcher vessels that catch fish and deliver them to shore for processing; tender vessels that purchase and transport fish from catcher vessels and resupply fishers with food, fuel, and other necessities; and fish processing vessels that both catch and process fish at sea. Commercial fishing vessels are also characterized by the type of fishing gear used, such as trawl nets, seine nets, gill nets, traps and pots, dredges, and hook and line. The targeted fish species determines the type of vessel and gear that fishers use in their operations. A commercial fishing vessel may participate in multiple fisheries, using various fishing gear, as needed. The Magnuson-Stevens Fishery Conservation and Management Act, as amended, provides for the conservation and management of fishery resources within the federal waters of the United States. The act defines "commercial fishing" to mean fishing in which the fish harvested, either in whole or in part, are intended to enter commerce or enter commerce through sale, barter, or trade. This act also created eight regional fishery management councils, which are responsible for preparing fishery management plans and setting annual catch limits for the fisheries within their areas of authority. NOAA's National Marine Fisheries Service, under authority delegated from the Secretary of Commerce, provides support for regional fishery management councils and approves and implements fishery management plans and plan amendments. Figure 1 illustrates the eight fishery management councils.
U.S. Commercial Fishing Vessel Classification and Safety Requirements
Under federal statute, commercial fishing vessels are categorized as uninspected vessels, and the Coast Guard generally does not have the authority to inspect the vessels during construction or regular maintenance. However, the Coast Guard is authorized to inspect all other commercial vessels such as freight, offshore supply, passenger, tank, and towing vessels. Through the inspection process, the Coast Guard ensures that a vessel's structure is suitable, that equipment and accommodations are maintained in an operating condition consistent with safety of life and property conventions, and that the vessel complies with applicable marine safety laws and regulations. Safety issues aboard commercial fishing vessels have been a long-standing concern. Various studies identified the problems and considered possible solutions to improve commercial fishing safety, but implementation of improved safety recommendations was largely left to the vessel owner's discretion. Following the loss of entire commercial fishing vessel crews during the mid-1980s, Congress passed the Commercial Fishing Industry Vessel Safety Act of 1988, which required safety improvements and examination of commercial fishing vessels for safety equipment. The act also instructed the Secretary of Transportation to conduct a study of the safety problems on fishing industry vessels and make recommendations on whether a vessel inspection program should be implemented. In 1991, the National Research Council conducted this study, which included a comprehensive assessment of commercial fishing vessel safety and identified a range of issues, including vessel fitness and safety and survival equipment, among other things.
The Council found that developing casualty rates was hampered by the absence of reliable data on the number of fishing vessels, vessel material condition, exposure variables, and other factors. The Council recommended a holistic approach to fishing vessel safety, including establishing vessel and equipment standards and developing a database to evaluate alternatives and monitor results. The Council stressed, however, the importance of balancing the anticipated benefits of a safety program with any costs that might be imposed through implementation. The Council also noted that classification costs would be borne principally by vessel owners and that the costs could be significant for individual vessel owners. Congress established classification requirements to address the construction and maintenance of fish processing vessels in 1988, and applied classification requirements to all types of commercial fishing vessels more broadly in 2010 and 2012 under the Coast Guard Authorization Act of 2010 and the Coast Guard and Maritime Transportation Act of 2012. In addition to classification requirements, Congress also established other requirements to improve vessel safety. For example, commercial fishing vessels that are 79 feet or longer, built after July 1, 2013, are required to have an assigned load line. A load line indicates the point where the waterline should reach when a vessel is properly loaded. Assignment of a load line, and issuance of a load line certificate, is conditional on the structural efficiency and satisfactory stability of the vessel, and on provisions provided for protection of the vessel and crew. As part of a load line certification, a vessel's seaworthiness is assessed by evaluating the vessel's watertight integrity, stability, and loading capacity. A vessel's stability booklet, prepared as part of a stability assessment, instructs operators on how to distribute weight across a vessel to prevent capsizing under different operating conditions. Figure 2 illustrates legislation and policy that addresses commercial fishing vessel construction and maintenance from 1988 to 2016. Classification requirements differ by commercial fishing vessel type and length and are only applicable to vessels built after certain dates, as seen in table 1. To address safety on older commercial fishing vessels, the 2010 and 2012 acts also directed the Secretary of Homeland Security to develop an alternate safety compliance program for commercial fishing vessels that are at least 50 feet in length, built before July 1, 2013, and are 25 years or older. The Coast Guard drafted requirements for the program but suspended the effort in July 2016 because, according to the agency, the program would have required a new rulemaking effort. At that time, the Coast Guard developed an Enhanced Oversight Program—through policy and its existing authorities—that focuses on older, non-classed commercial fishing vessels that may pose a greater risk of vessel and crew member loss. In addition, in January 2017, the Coast Guard issued a list of voluntary safety initiatives and good marine practices and encouraged vessel owners to implement these initiatives on all non-classed vessels where possible and reasonable. The Coast Guard is also currently working on aligning its existing regulations on commercial fishing vessels with the classification requirements introduced in the 2010 and 2012 acts.
Vessel Classification Process and Procedures
Through the classing process, classification societies, such as the American Bureau of Shipping (ABS), Det Norske Veritas Germanischer Lloyd (DNV GL), and RINA, address aspects of the vessel's design, structural integrity, reliability and function of major systems, and accident prevention. Classification societies (1) establish and maintain standards for the construction and classification of vessels and offshore structures; (2) supervise construction in accordance with these standards; and (3) carry out regular surveys of vessels in service to ensure compliance with these standards. Once a vessel is "classed" with a certificate indicating that it meets a minimum level of safety and quality, the vessel is subject to periodic inspection to verify that it continues to meet the applicable rules of the issuing classification society, or it risks losing its classification certificate, which could prevent the vessel from operating legally. Figure 3 illustrates the classification process for vessel design, construction, and maintenance. Of the 39 U.S. fishing vessels classed by three societies, as shown in table 2, at least 29 are fish processing vessels. Although commercial fish processing vessels built or converted after July 27, 1990, are required by U.S. law to be classed, the law permits a vessel to be exempted from this and other statutory requirements under certain conditions. Few commercial fish processing vessels have an active class certificate. Older U.S. fish processing vessels—most of which operate off the coast of Alaska—generally fall under the Coast Guard's Alternative Compliance and Safety Agreement Program, which is implemented pursuant to exemption authority provided under law. Under this program, vessel owners apply to the Coast Guard for an exemption from classing and load line requirements so long as the vessel meets improved safety standards provided for under the program.
Federal Agencies Involved in Commercial Fishing Vessel Safety
Several different federal agencies play a role in overseeing and promoting commercial fishing vessel safety:
Coast Guard: The Coast Guard, the only military service within the Department of Homeland Security (DHS), counts search and rescue and marine safety among its primary missions. As part of its safety activities, the Coast Guard performs mandatory safety inspections, conducts accident investigations, and promotes accident prevention involving vessels at sea. In 2015, the Coast Guard also began performing mandatory examinations of safety equipment onboard commercial fishing vessels. The Coast Guard records all interactions with vessels, including commercial fishing vessel accidents, in the Marine Information for Safety and Law Enforcement database. Coast Guard regulations require vessel operators to report a marine casualty involving damage to the vessel or other property; injury or loss of life; or harm to the environment. The Coast Guard is also responsible for enforcing fishery management laws and regulations.
National Institute for Occupational Safety and Health (NIOSH): As part of the Department of Health and Human Services' Centers for Disease Control and Prevention, NIOSH is responsible for conducting research and making recommendations for new or improved work-related safety and health standards.
For example, it has recommended that all fishing vessel operators conduct monthly safety drills as required by federal regulation; heed weather forecasts and avoid fishing in severe sea conditions; and maintain watertight integrity by examining and monitoring the hulls of their vessels. NIOSH maintains a Commercial Fishing Incident Database, which is composed mostly of data on fishing industry fatalities abstracted and coded from reports of Coast Guard investigations of marine casualties.
National Transportation Safety Board (NTSB): NTSB investigates commercial fishing vessel accidents that involve the most significant damage and loss of life. NTSB conducts investigations (sometimes in parallel with the Coast Guard) to determine the probable cause of vessel accidents and issues safety recommendations aimed at preventing future accidents. For example, with regard to commercial fishing vessels, NTSB recommends regularly conducting safety drills, providing proper training in stability and firefighting, and wearing a flotation aid at all times while working on deck.
National Marine Fisheries Service: National Marine Fisheries Service uses fishery observers and at-sea monitors to collect data from U.S. commercial fishing vessels to monitor federal fisheries, assess fish populations, set fishing quotas, and inform fishery management practices. Under federal regulations, fishing vessels that may carry a fishery observer as part of a required or voluntary observer program generally must pass a Coast Guard commercial fishing vessel safety examination and be issued a safety decal. Further, under federal regulations, fishery conservation and management measures must, to the extent practicable, promote the safety of human life at sea, and should minimize or mitigate safety impacts where practicable.
Number of Commercial Fishing Vessel Accidents, Injuries, and Fatalities Varied from 2006-2015, but Rates Cannot Be Determined
The Coast Guard investigated 2,101 commercial fishing vessel accidents between 2006 and 2015 that were identified as occurring in federal waters. While the number of accidents in 2015 was greater than the number reported in 2006, the numbers of injuries and fatalities declined over the same 10-year period. We could not assess the number of accidents, injuries, and fatalities by fishery due to limitations with the Coast Guard's data. In addition, we were unable to calculate the rates of commercial fishing vessel accidents, injuries, and fatalities because reliable data on certain information needed to do so—including the total number of vessels that are actively fishing and the fishery or region in which the vessel operates—are either not maintained or are not collected by the Coast Guard or other federal agencies.
Number of Commercial Fishing Vessel Accidents in Federal Waters Varied from 2006-2015 and Instances of Injuries and Fatalities Have Declined
Between 2006 and 2015, the Coast Guard investigated 2,101 commercial fishing vessel accidents that occurred in federal waters. Coast Guard data indicate that the number of accidents generally increased through 2013 before falling slightly over the next two years, but it remains above the level experienced in 2006. Of those, the Coast Guard investigated 193 serious marine incidents—those resulting in death, injury, or significant property damage, or involving environmental damage in federal waters.
Figure 4 shows the number of commercial fishing vessel accidents and serious marine incidents that occurred in federal waters for 2006 through 2015. From 2006 through 2015, 598 of 2,101 commercial fishing vessel accidents in federal waters resulted in an injury and/or fatality. These accidents resulted in a total of 507 injuries and 182 fatalities over this period. Coast Guard data indicate that the numbers of injuries and fatalities have been declining since 2012, and 2015 figures are substantially below the levels reported in 2006, as seen in figure 5. Due to limitations with the Coast Guard's data, we were unable to portray numbers of accidents, injuries, or fatalities by fishery within a specific geographic area. Although we identified the area in which each commercial fishing vessel accident occurred, using latitudinal and longitudinal information included in the Coast Guard's database, we could not reliably assign each accident, injury, or fatality to a fishery managed by interstate marine fisheries commissions or fishery management councils—entities which manage fishery resources in state and federal waters, respectively. National Marine Fisheries Service officials stated that even though an accident occurred in an area of federal waters that falls within the jurisdiction of a particular council, the vessel may not have been participating in a fishery managed, either solely or in part, by that council. Data on a vessel's intended fishery on the day of the accident provide accurate information on the intended area in which a vessel should be operating. Assigning a commercial fishing vessel accident to a specific fishery management council on a solely geographic basis—without consideration of the vessel's targeted fishery—could overestimate the prevalence of accidents in a council jurisdiction. While the Coast Guard's database includes a field for a vessel's fishery, these data were not collected for the majority of commercial fishing vessel accidents between 2006 and 2015. An official in the Coast Guard's Office of Investigations and Analysis stated that data on a vessel's fishery are not required in order to complete an accident investigation and, therefore, may not be collected. Federal internal control standards establish that management should obtain relevant, accurate data from reliable sources in a timely manner, and recommend that agencies' management use quality information to make informed decisions and evaluate the entity's performance in achieving key objectives and addressing risks. The lack of complete and reliable data on the vessel's fishery in the Coast Guard's database hinders efforts to assess whether particular fisheries experience higher numbers of accidents, injuries, or fatalities than others. Such information would benefit the Coast Guard's analysis of commercial fishing vessel accidents, injuries, and fatalities because information on a vessel's fishery can be used for a regional analysis of these events.
Relevant Federal Agencies Do Not Collect Reliable Information on the Active Fleet to Enable Calculation of Rates
The Coast Guard and other federal agencies do not collect data on the total number of vessels that are actively fishing—those that are operating, landing, and selling catch—and we found that existing data on the population of commercial fishing vessels are not sufficiently reliable to calculate rates of commercial fishing vessel accidents, injuries, and fatalities.
Data on the total number of commercial fishing vessels actively catching and processing fish are necessary to determine rates—the ratio of the number of accidents, injuries, or fatalities to the total number of active commercial fishing vessels. These rates, if based on reliable data, would establish trend information on the number of accidents involving commercial fishing vessels. While the Coast Guard collects some data on commercial fishing vessels that operate in federal waters—including a vessel's length and construction date—data on the population of the active U.S. commercial fishing vessel fleet are not reliably known. The Coast Guard's National Vessel Documentation Center maintains a registry of valid certificates of documentation—which indicate that a vessel is registered with the Coast Guard and is greater than 5 net tons—for commercial fishing vessels that operate in federal waters. However, even when the Coast Guard could identify the number of documented vessels, we found the data they provided were unreliable for determining the total number of commercial fishing vessels that are actively fishing. For example, a senior Coast Guard official estimated that more than 20 percent of the vessels documented in 2015 were not actively fishing and may not have been operational or otherwise in use. As part of vessel registration, the Coast Guard collects information on a vessel's length and date of construction. Other data, however—such as the fishery or the geographic area in which a vessel operates—are not collected. Data on key characteristics of the total number of commercial fishing vessels actively fishing—including vessel length, age, and fishery or region of operation—would provide additional information when analyzing rates of commercial fishing vessel accidents, injuries, and fatalities. Other federal agencies involved in the commercial fishing vessel industry also do not collect data on the total number of active U.S. commercial fishing vessels. A national count of federally permitted commercial fishing vessels can be used, in part, to help determine the number of commercial fishing vessels that are actively fishing. Federal permits are required for commercial fishing vessels that fish in certain fisheries and, according to officials from the National Marine Fisheries Service, these fishing permits are issued by NOAA's regional offices, and each regional office manages its own data. National Marine Fisheries Service officials stated that they are developing a national count of federally permitted commercial fishing vessels, but they noted that a competing priority delayed this effort and that it will recommence in the coming year. However, a permit alone does not mean that a vessel is active, and additional data on vessel activity—such as information from log books, fish tickets, and fishery observers—are needed to identify vessels that are actively fishing. Similarly, a statistician from NIOSH—the federal agency that maintains data on commercial fishing fatalities and is responsible for conducting research and making recommendations for the prevention of work-related injury and illness—stated that he has encountered challenges estimating the total size of the active U.S. commercial fishing fleet because the majority of commercial fishing vessels are state-registered, and comprehensive data on the number of state-registered vessels are not available.
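To illustrate the calculation at issue, the minimal sketch below computes event rates per 1,000 active vessels, stratified by fishery. The fishery names and counts are hypothetical placeholders, not Coast Guard or National Marine Fisheries Service data, and the per-1,000 scaling is simply one common convention. The point it demonstrates is that the denominator, a reliable count of actively fishing vessels, must exist before any such rate is meaningful.

    # Minimal sketch of the rate calculation described above.
    # All counts are hypothetical; no reliable active-vessel denominators exist today.

    def rate_per_1000(events: int, active_vessels: int) -> float:
        """Events (accidents, injuries, or fatalities) per 1,000 active vessels."""
        if active_vessels <= 0:
            raise ValueError("requires a reliable count of actively fishing vessels")
        return 1000 * events / active_vessels

    # Hypothetical strata: (accidents, active vessels) by fishery.
    by_fishery = {
        "Fishery A": (40, 1_200),
        "Fishery B": (15, 300),
    }

    for fishery, (accidents, active) in by_fishery.items():
        print(f"{fishery}: {rate_per_1000(accidents, active):.1f} accidents per 1,000 active vessels")

The same computation stratified by vessel length or region would work identically, which is why collecting those characteristics alongside the active-vessel count matters.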
Coast Guard officials acknowledged that they do not collect data on state-registered vessels unless the Coast Guard has been in contact with them. Officials from the Coast Guard and the National Marine Fisheries Service agreed that it is important to calculate rates to assess the number of commercial fishing vessel accidents, injuries, and fatalities. At present, however, no particular federal agency has collected or calculated the national number of active commercial fishing vessels—those that are fishing and selling their catch—or the region and fishery in which these vessels operate. Once a reliable count of the number of active commercial fishing vessels is established, rates can be calculated by other characteristics, such as the fishery or fisheries in which a vessel operates or vessel length. These rates would provide further insight into commercial fishing vessel accidents, injuries, and fatalities, including the percentage of vessels that are involved in an accident in a specific region or the percentage of accidents that involve vessels of a certain length, for example, vessels greater than 79 feet. Federal internal control standards establish that management should obtain relevant data from reliable sources in a timely manner, and recommend that agencies' management use quality information to make informed decisions. The Coast Guard and the National Marine Fisheries Service are collecting data that could be used to develop an estimate of the total number of commercial fishing vessels that are actively fishing; however, each agency is taking a different approach, in part because they are doing so for different purposes. Specifically, the Coast Guard collects data on commercial fishing vessels, and the National Marine Fisheries Service collects data on permits for federally managed fisheries, as well as other data on fishing activities. These data can be used, in part, to help determine the number of commercial fishing vessels that are actively fishing. In addition to the Coast Guard and the National Marine Fisheries Service, an agency such as NIOSH—which is involved in commercial fishing vessel safety—could benefit from information derived from these ongoing efforts. Without such information, Congress and the agencies will lack important data needed to accurately assess the factors that contribute to commercial fishing vessel accidents, injuries, and fatalities. Establishing a mechanism—such as a working group—to coordinate efforts and collect reliable data on the number of active vessels and key characteristics, such as vessel age and length, would allow the agencies to do so in an efficient manner.
While Data on the Costs of Classing Are Limited, Stakeholders Believe Classing Will Increase Ownership Costs
We were able to obtain limited data on the costs of classification because only six classed vessels have been built, and builders and owners were reluctant to provide cost data, which they consider proprietary. Classification society representatives, vessel owners, and builders we interviewed agreed, however, that constructing and maintaining classed commercial fishing vessels will increase ownership costs due, in part, to the fees charged by classification societies, the requirement to use certified materials and equipment, and annual maintenance surveys, among other costs.
Despite the uncertainty as to how much classification will increase total ownership costs, vessel builders and owners stated that the potential costs associated with classing have contributed to reduced orders for new vessels and other changes.
Extent to Which Classing Increases Design, Construction, and Maintenance Costs Is Uncertain
All stakeholders we interviewed—classification society representatives, vessel owners, and builders—stated that classing will increase ownership costs. These stakeholders identified the following additional costs associated with constructing a classed commercial fishing vessel:
naval architect fees for vessel design;
additional builder engineering costs associated with finalizing classed vessel designs;
classification society review of key equipment drawings and certification of equipment manufacturing;
increased builder costs to construct the vessel to the classification society-approved design;
additional supervision and testing during vessel construction;
additional classification society design reviews and surveys, as needed, during vessel design and construction; and
stability assessments and load line assignment.
However, we were able to obtain only limited data on these costs as (1) few vessels have been constructed and classed by the societies included in our review and (2) the owners/operators and builders of these classed vessels are reluctant to share the associated cost documentation, considering it proprietary. Only six vessels have been constructed and classed since July 2013, when expanded classification requirements took effect. Two of these vessels—one tender and one catcher—were classed because they were subject to the July 2013 expanded classification requirements; the remaining four vessels were factory processors, which have been required to meet classification society standards since July 1990. All of the classed vessels constructed since July 2013 are greater than 130 feet in length and are owned by companies that own and operate multiple fishing vessels, with the exception of the tender vessel, which is 67 feet long and owned and operated by a non-profit organization.
Commercial Fishing Vessel. Vessel type: Trawler (catcher or catcher/processor). Fleet length: 40-500 feet or longer. Trawlers fish for pollock, cod, sole, rockfish, shrimp, and other species by towing funnel-shaped nets behind them in which the catch is trapped by the forward movement of the boat. Depending on the desired catch, trawlers tow the nets in very shallow waters up to a depth of about 6,500 feet along the seafloor. Large, offshore factory trawlers can also process their catch on board. Freezer trawlers are outfitted with a refrigerating plant and freezing equipment.
Two builders, located in the Gulf of Mexico and Pacific regions, provided quotes on classification society fees and a construction bid; another builder provided an estimate of the costs associated with designing and constructing a classed vessel approximately 90 feet in length. Collectively, this information indicates that the additional costs could range from approximately $300,000 to $1.2 million above the total construction cost of a vessel not built to these standards. In general, vessel builders, owners, naval architects, marine safety experts, academics, and other experts we spoke with provided widely varying estimates of the impact that classification may have on vessel construction costs, though many suggested a range of 10 to 30 percent.
In contrast, representatives from one classification society stated that shipbuilders who currently build other ships to classification requirements have estimated that an additional 2.5 to 5 percent in overall construction costs would be needed to construct a classed fishing vessel. We could not, however, independently assess the accuracy of these claims. With regard to classification society fees, classification society representatives stated that the fees they charge for vessel design approval and surveys conducted during the construction of a classed commercial fishing vessel vary depending on the complexity of the vessel's design, as well as the builder's level of expertise in constructing classed vessels. These fees typically account for 1.0 to 1.5 percent of the costs to design and construct a classed vessel. A builder on the West Coast provided us with a quote from one of the classification societies of approximately $136,000 for design reviews and construction surveys for a $2 million, 58-foot commercial fishing vessel, or about 7 percent of the vessel's total construction costs. Another builder in the Gulf of Mexico stated that constructing a 90-foot commercial fishing vessel generally costs him approximately $2.3 million, but constructing the same vessel with classification requirements would incur approximately $195,000 in additional classification fees, about 8 percent of construction costs. A vessel owner who owns and operates two catcher vessels off the coast of Alaska and is currently constructing a 300-foot factory processing vessel estimated that classification fees for vessel design and construction would likely amount to $300,000—approximately 0.4 percent—of the vessel's $70 million total purchase price. These fees included an initial review of the vessel's design and, generally, the review of one set of drawing revisions. If a builder needs to resubmit the vessel's design to the classification society for another review, each submission could be subject to additional fees. Representatives from both ABS and DNV GL explained that the fees they charge do not account for additional design and oversight services that might be necessary during the construction process, especially if this is the first time that the vessel builder has constructed a classed vessel. Vessel owners and builders told us that other costs associated with constructing a classed commercial fishing vessel include the use of certain materials, such as steel, and key equipment, such as generators and the engine, which may be more costly to purchase from the manufacturer since the items must be certified by the classification society. As part of classing, surveyors from classification societies are required to certify the fabrication and/or assembly of certain materials and key equipment prior to installation on the vessel. For example, two individuals—a vessel owner and someone with years of experience working in the commercial fishing industry—provided documentation showing that two types of class-certified equipment—generators and engines—cost approximately 6 to 16 percent more than the same, non-certified equipment. DNV GL representatives estimated that, in total, class-certified materials and key equipment can cost $20,000 to $30,000 more than the cost of non-certified equipment.
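As a check on the percentages quoted above, the reported fees can be recomputed directly from the dollar figures cited in this section. The short sketch below does so; the labels are ours, and the figures are simply the quotes and estimates reported to us, not independent cost data.

    # Recomputing fee-to-cost shares from the dollar figures cited above.
    quotes = [
        ("58-foot West Coast vessel", 136_000, 2_000_000),      # quoted fees vs. construction cost
        ("90-foot Gulf of Mexico vessel", 195_000, 2_300_000),  # estimated additional fees
        ("300-foot factory processor", 300_000, 70_000_000),    # fees vs. total purchase price
    ]

    for label, fees, total_cost in quotes:
        share = 100 * fees / total_cost
        print(f"{label}: classification fees are about {share:.1f} percent of cost")

Run as written, this reproduces the approximately 7, 8, and 0.4 percent figures reported above (6.8, 8.5, and 0.4 percent, respectively).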
Vessel owners we interviewed stated that they may incur additional costs to maintain a classed commercial fishing vessel over the vessel's lifetime. These costs include fees paid to classification society surveyors to conduct annual surveys—required as part of regular class maintenance—as well as periodic surveys—more extensive surveys generally required every 5 years—on the vessels. Representatives from one classification society estimated that, depending on size, age, and condition, the fees for fishing vessel annual surveys can range between $1,500 and $5,000, while the fees for periodic surveys can range between $6,000 and $25,000. Classification society representatives stated that the high end of the fees for periodic surveys is influenced by the fact that many owners choose to perform major maintenance, upgrades, and modifications at the same time, which increases the overall survey items and, therefore, the cost. Owners we interviewed stated that in addition to these annual survey fees, they are required to pay for the surveyor's travel costs as well as any necessary repairs the surveyor identifies. Those vessel owners we interviewed estimated that the annual maintenance costs for a classed commercial fishing vessel—including fees, travel costs, and repairs—could range from $28,000 to as much as $150,000. For example, an invoice we received from one vessel owner totaled over $70,000. More than one-third of the total cost was due to fees for periodic, annual, and equipment surveys. The majority of the remaining costs were associated with the purchase and installation of new machinery and repairs made to the vessel, as well as travel expenses paid to the classification society. Vessel owners we interviewed or corresponded with provided examples of potential challenges that arise when maintaining classed vessels, such as annual surveys being scheduled at a time or location that interferes with fishing operations; the unavailability of classification surveyors at a convenient location; and the time required to obtain classed materials or equipment before an emergency repair can be completed. One owner noted that he once waited 2 weeks and paid three times more to replace three square feet of classification society-certified steel. However, ABS representatives stated that vessel owners have a 6-month window to meet their annual survey requirement, that ABS generally has two surveyors working in Alaska at any given time, and that the society is open to adding more surveyors in Alaska as needed. Similarly, DNV GL representatives stated that to mitigate the cost and time associated with surveyors' travel, the society has begun to use networked or stand-alone electronic devices to record certain non-major classing inspections. Several industry representatives noted that some of the additional costs associated with constructing and maintaining classed vessels may be partially offset by decreased insurance premium costs and improved vessel resale value for vessel owners. Coast Guard officials we interviewed similarly noted that classed vessels may command a higher resale price. However, marine insurance underwriters we interviewed stated that prior claim history—not classification—is the key factor that influences insurance premiums for commercial fishing vessels. One of the underwriters added that owners of classed commercial fishing vessels might actually pay higher insurance premiums than owners of non-classed vessels because hull and machinery claims for classed vessels would likely be more expensive to repair.
With regard to whether a classed commercial fishing vessel has a higher resale value, some vessel owners we spoke with stated that the maintenance costs associated with owning a classed vessel would actually deter them from purchasing an existing classed vessel.

Commercial Fishing Stakeholders' Views on the Potential Impact of Classing

Many of the stakeholders we spoke with told us that classing and its associated costs have changed, and will continue to change, aspects of the commercial fishing business, including profitability and the construction of new vessels. Several stakeholders stated that their ability to absorb the additional costs due to classing depends on the relative health of the fishing businesses involved. Vessel owners we interviewed in less profitable regions and fisheries, such as the shrimp fishery in the Gulf of Mexico and the groundfish fishery in the North Atlantic, believed that their businesses will be adversely affected by the increased construction costs associated with classing. One vessel owner, whose small-scale commercial fishing operation in the Gulf of Mexico employs approximately 40 individuals and operates 3 vessels, estimated that constructing a vessel to meet classification society standards would increase overall construction costs by 30 percent, an amount she believes she cannot absorb because shrimp prices are sensitive to the international market. While vessel owners in more profitable regions and fisheries believed that their businesses could absorb the increased construction costs associated with classing, one owner whose family has fishing operations in 10 different fisheries, some profitable and some less so, noted that the addition of a newly constructed classed vessel to his fleet—which he estimated cost about 35 percent more due to classing requirements—was still a sound business decision on his part because the vessel will operate in the more profitable North Pacific fishery. However, he added that his family would not incur similar costs to construct a new classed vessel to operate in the scallop industry, in which they also have business operations. Another issue that arose in our discussions with stakeholders was that the perception of the increased cost associated with constructing a classed commercial fishing vessel—regardless of what the actual cost increase may be—appears to be affecting vessel owners' decisions to purchase new vessels. Among the 13 vessel builders we interviewed, 9 stated that classification requirements and their perceived costs have contributed to a significant reduction in orders for new commercial fishing vessels, regardless of vessel length. One builder noted that he reduced his workforce from nearly 100 to fewer than 50 workers and began constructing other vessels, such as tug boats, in addition to commercial fishing vessels to keep his remaining employees working. One industry representative stated that owners, especially those with smaller operations in less profitable fisheries, may find it cost prohibitive to recapitalize their vessel or fleet. Similarly, vessel owners stated that they will likely continue operating their aging vessels or close their businesses rather than purchase new classed vessels. Other vessel owners stated that they would either consider, or already have chosen, to purchase and update an older commercial fishing vessel instead of constructing a new classed vessel.
For example, one vessel owner we interviewed, whose family has fished commercially along the Gulf of Mexico for 150 years, stated that the new classing requirements for commercial fishing vessels have resulted in several businesses rebuilding older vessels, whereby a new vessel is constructed around the original keel of an older vessel that is not subject to classing requirements. Another vessel owner we interviewed, whose family also has a history in commercial fishing, told us that he and other members of his family would like to build several new vessels to add to their already sizable fleet but have decided not to do so because of the perceived costs associated with the classing process. Instead, this vessel owner commented that some members of his family recently purchased two wrecked commercial fishing vessels and intend to construct a new vessel using one of the wrecked vessels' 40-year-old keels. Industry trade representatives also voiced concerns that when owners choose to recapitalize their vessels, classing requirements could encourage owners to purchase smaller vessels to avoid classification requirements. For example, one builder we interviewed offers a design for a 45- to 49-foot crab vessel, which, because of its size, would not be subject to classification requirements. The builder explained that the vessel would be shorter than other vessels operating in the Bering Sea and could be less safe for the crew on board in the event of an accident. Further, naval architects we interviewed stated that they know of vessel owners who have begun to seek new commercial fishing vessels less than 50 feet in length.

Classification Can Contribute Safety Benefits, but Other Factors and Measures Also Play a Significant Role

Federal agency officials tasked with overseeing the commercial fishing industry, as well as industry representatives, academics, builders, and owners we interviewed, agreed that classing provides some benefits and could contribute to overall vessel safety by providing independent and ongoing oversight to ensure quality and seaworthiness during the design and construction of the vessel, as well as through annual maintenance surveys. At the same time, however, vessel owners we interviewed noted that overall vessel safety can also be improved by instituting other safety measures or design approaches. As shown in figure 6, classification addresses vessel design, construction, and maintenance, but training, safety and lifesaving equipment, environmental, and other factors also contribute to commercial fishing vessel safety. As one industry trade representative explained, classing commercial fishing vessels is another approach for improving industry safety by ensuring that key systems aboard the vessel are in good working order, thereby potentially breaking the chain of events leading to a major catastrophe at sea, such as a vessel sinking. According to a representative for a larger commercial fishing company, vessel owners benefit from the oversight provided by classification society surveyors during the construction process because the surveyors provide another set of eyes and the perspective of a third party. An owner of a large commercial fishing business stated that vessel owners who do not maintain their classed vessels, and thereby jeopardize the lives of their crew, risk losing their vessel's classification certificate, which, in turn, will prevent them from operating the vessel legally.
Overall, commercial fishing industry representatives supported the requirement that commercial fishing vessels with factory processors on board be classed because of the risks these vessel owners face with such a large number of factory workers—who are not mariners—working on board. Most vessel owners that we interviewed or received written documentation from, however, did not support classification for smaller commercial fishing vessels—especially those operated by individual owners with small crews. To illustrate that different factors contribute to commercial fishing vessel safety, we collected data on fishing vessel accident claims from two U.S.-based marine insurance underwriters that insure commercial fishing vessels. While our findings are not generalizable to all insurance claims made between 2013 and 2016, we found that protection and indemnity claims, which cover liability for bodily injury and third-party damage, accounted for nearly two-thirds of insurance claims for these two underwriting companies. Hull and machinery claims also comprised a significant number of overall insurance claims over the period. These claims can be made as a result of physical loss of or damage to the vessel, including equipment, engines, and machinery. Figure 7 shows the number and types of claims for 2013 through 2016 from the two marine underwriting companies we interviewed. One vessel owner we interviewed stressed the importance of safety training so crew members are capable of using lifesaving equipment when it is needed. She referred us to a Coast Guard analysis of fishing vessel casualties occurring from 1992 to 2010 that found fatalities from water exposure might have been prevented if personal flotation devices or survival suits had been used. In its analysis, the Coast Guard found that 32 percent of all fatalities between 1992 and 2010 resulted from crew falling overboard, being pulled overboard by equipment, or diving from the vessel. Other vessel owners who operate in the Gulf of Mexico stressed several safety measures, such as requiring vessel crew members to undergo routine drug testing; requiring vessel crew members to wear personal flotation devices when working on deck; requiring all commercial fishing vessels that use a winch to hoist catch from the ocean to install either a guard or an emergency shut-off mechanism; and mandating skills-based training and testing of safety procedures for each vessel crew member, not just the individual in charge of the vessel, as the law currently requires. Commercial fishing industry representatives and vessel owners we interviewed also stated that stability assessments and load line assignments—which are required for fishing vessels built after July 1, 2013, that are 79 feet or longer—may provide safety benefits comparable to classification. A load line indicates the point where the waterline should reach when a vessel is properly loaded. As part of a load line certification, a vessel's seaworthiness is assessed, which involves the completion of stability documentation that provides the operator with instructions for safely loading and operating the vessel. Load line requirements cover some of the same items as classification rules, such as pre-construction review and approval of plans by the assigning authority, weathertight and watertight integrity, and periodic inspections to verify proper maintenance and ensure that modifications to the vessel do not compromise seaworthiness.
Alternative-to-Class Approach Offers Benefits Relative to Classification, but Key Elements Remain Open to Interpretation

The alternative-to-class approach provides some flexibility and potential cost savings to vessel owners compared to classification, but we did not identify a builder who has constructed a vessel using this approach. The Coast Guard has not issued regulations or guidance to clarify how the alternative-to-class approach will be implemented, which increases uncertainty about how key steps in the process should be conducted. The Coast Guard Authorization Act of 2015 created an alternative-to-class approach for vessels at least 50 feet and not more than 79 feet in length built after February 8, 2016. Under the alternative-to-class approach, a commercial fishing vessel is designed to standards equivalent to classification society standards. For example, the alternative-to-class approach requires a stability assessment; an assigned loading mark (or load line); certification that construction is in accordance with the design; and written stability and loading instructions that are provided to the owner or operator to ensure a robust hull and weathertight and watertight integrity. As such, the structural strength of the vessel's hull, the reliability and function of major systems—including propulsion and steering—and the watertight integrity of the vessel are expected to be comparable to those of a classed vessel. However, the alternative-to-class approach provides some flexibility to builders and owners in how to achieve this, as shown in figure 8. The alternative-to-class approach provides additional flexibilities to builders and owners and potentially reduces compliance costs compared to classing a new vessel. Examples of the flexibilities and potential drawbacks the alternative-to-class approach offers include the following. It enables a marine surveyor of an organization accepted by the Secretary of Homeland Security, rather than a classification society representative, to verify that the vessel's construction meets design requirements and to conduct inspections. Coast Guard officials told us that such individuals need to be licensed by an organization, such as the Society of Accredited Marine Surveyors or the National Association of Marine Surveyors, to be deemed qualified by the Coast Guard. It reduces the inspection requirement from annually to at least twice every 5 years, and, according to Coast Guard officials, the alternative-to-class approach does not impose requirements for disassembly and inspection of propulsion machinery, generators, electrical systems, pumps, and piping. It also requires owners to maintain records to demonstrate compliance with the alternative-to-class approach, which may be burdensome for some vessel owners. However, our interviews with commercial fishing stakeholders and our analysis raised several questions as to how certain aspects of the alternative-to-class approach will be implemented. For example, stakeholders raised a number of questions about state licensing requirements for naval engineers and architects, including whether licenses issued in one state would be recognized by other states. One naval engineer in the North Pacific told us that he had to secure an engineering license to do work for a client in another state, despite holding the same license in his home state. Coast Guard officials did not believe that differences in state licensing requirements should be an issue.
Coast Guard officials explained that although each state may have different licensing requirements, one professional society sets the technical standards for professional engineers, and these common standards apply across all states. Despite this, it remains uncertain whether individual states will recognize other states' engineering licenses. Table 3 highlights our analysis of the key issues raised by industry stakeholders during the course of our review. Although Coast Guard officials believe that the legislation clearly outlined the requirements for this approach, numerous open questions exist regarding implementation of the alternative-to-class approach, as depicted in table 3. The Coast Guard has not yet issued regulations or guidance concerning the alternative-to-class approach. Coast Guard officials noted they are still in the process of developing a final rule to implement earlier legislation, including the Coast Guard Authorization Act of 2010, as amended by the Coast Guard and Maritime Transportation Act of 2012. At the time of our review, Coast Guard officials acknowledged they were uncertain when this rule would be finalized. These officials stated that any effort to promulgate rules for the 2016 alternative-to-class approach will not start until after the final rule regarding the 2010 and 2012 acts is issued. However, Coast Guard officials noted they were considering developing a policy letter to provide some additional guidance on implementing the alternative-to-class approach, but they provided no time frame for doing so. The Coast Guard is responsible for implementing the alternative-to-class statute, but questions remain regarding how this implementation will be achieved. While the 2016 legislation did not require the Coast Guard to promulgate guidance or regulations for the alternative-to-class approach, regulations are one of the primary tools federal agencies use to implement law and policy. The general process by which federal agencies develop and issue regulations allows the public an opportunity to provide information to agencies on the potential effects of a rule or to suggest alternatives for agencies to consider prior to publication of the final rule. Federal internal control standards recommend that agency management communicate the necessary quality information—such as regulations describing the procedures to be followed to comply with the alternative-to-class legislation—to both internal and external stakeholders to achieve objectives. Without specific written procedures—either in the form of regulations or guidance—the Coast Guard cannot ensure consistent implementation of the alternative-to-class approach.

Conclusions

Since the late 1980s, Congress has undertaken efforts to improve commercial fishing vessel safety, including establishing classification requirements for all three types of commercial fishing vessels—catchers, tenders, and processors—and, most recently, establishing an alternative-to-class approach as a less prescriptive option for smaller vessels. Accurate data collected by the Coast Guard during incident investigations—such as the fishery in which the vessels operate—are necessary to understand which fishing vessels are involved in accidents. In addition, reliable data on the total number of commercial fishing vessels that are actively fishing, as well as information on key vessel characteristics—including vessel age, length, and fishery—are necessary to calculate rates and establish trend information for commercial fishing vessels involved in accidents.
Without such information, Congress, the Coast Guard, and other federal agencies—such as NIOSH—will not be able to assess the factors that contribute to commercial fishing vessel accidents, injuries, and fatalities. While the costs of classification cannot be reliably measured, industry stakeholders perceive the potential costs associated with classing—regardless of what the actual costs are—as affecting the commercial fishing industry, including through reduced orders for new vessels, the continued operation of aging vessels, and the loss of income for commercial fishers. The alternative-to-class approach provides greater flexibility and potential cost savings to owners of smaller commercial fishing vessels. Although not required to do so, the Coast Guard has not issued guidance or promulgated regulations to clarify aspects of the alternative-to-class approach, and the absence of timely regulations or guidance has contributed to confusion among the commercial fishing industry and increases the risk of potentially inconsistent implementation of the alternative-to-class approach.

Recommendations for Executive Action

We are making a total of six recommendations: four to the Commandant of the Coast Guard, one to the Director of NIOSH, and one to the Assistant Administrator for Fisheries for the National Marine Fisheries Service. The Coast Guard should ensure that the data it collects during commercial fishing vessel incident investigations, including the fishery in which the commercial fishing vessel is involved, is accurately captured. (Recommendation 1) The Coast Guard should form a working group with NIOSH and the National Marine Fisheries Service to determine an efficient means to establish a reliable estimate of the population of commercial fishing vessels actively fishing, landing, and selling their catch; the fishery in which a vessel operates; and key vessel characteristics including, but not limited to, vessel age and length. (Recommendation 2) Once reliable data are available, the Coast Guard, or another agency identified by the working group, should assess the rates of commercial fishing vessel accidents, injuries, and fatalities to determine whether certain factors—including vessel length and region of operation, among other things—affect these rates. (Recommendation 3) The Coast Guard should issue regulations or guidance to clarify and implement the alternative-to-class approach. (Recommendation 4) NIOSH should form a working group with the Coast Guard and the National Marine Fisheries Service to determine an efficient means to establish a reliable estimate of the population of commercial fishing vessels actively fishing, landing, and selling their catch; the fishery in which a vessel operates; and key vessel characteristics including, but not limited to, vessel age and length. (Recommendation 5) The National Marine Fisheries Service should form a working group with the Coast Guard and NIOSH to determine an efficient means to establish a reliable estimate of the population of commercial fishing vessels actively fishing, landing, and selling their catch; the fishery in which a vessel operates; and key vessel characteristics including, but not limited to, vessel age and length. (Recommendation 6)

Agency Comments and Our Response

We provided a draft of this product to the Departments of Homeland Security, Health and Human Services, and Commerce to respond on behalf of the Coast Guard, NIOSH, and NOAA, respectively, for review and comment.
The Departments of Health and Human Services and Commerce concurred with the recommendations directed to their respective components. The Department of Homeland Security concurred with three of the four recommendations. The departments' written comments are reprinted in appendixes III-V, respectively, and summarized below. We also sent a draft of this product to NTSB for its review and comment. The departments and NTSB also provided technical comments, which we incorporated as appropriate. The Department of Homeland Security concurred with our recommendation to ensure that the data the Coast Guard collects during commercial fishing incident investigations, including the fishery in which the vessel is involved, is accurately captured. It noted that the Coast Guard will reemphasize the need to collect fishery data as part of its training programs and the qualification requirements for its investigators. Additionally, it stated that the Coast Guard will consider adding data fields within its Marine Information for Safety and Law Enforcement database to improve the accuracy of the data collected. The Departments of Homeland Security, Health and Human Services, and Commerce concurred with our recommendations directed to them to form a working group to establish a reliable estimate of the population of commercial fishing vessels, the fishery in which each vessel operates, and key vessel characteristics. The Department of Homeland Security noted that neither the Coast Guard nor the National Marine Fisheries Service has access to data for fisheries within economic zones managed by the states. As such, the Department of Homeland Security recommended that (1) the working group be established at the regional level and (2) the regional fisheries management councils coordinate with individual states to collect needed data and, in turn, provide those data to the Coast Guard and the National Marine Fisheries Service. Additionally, the Department of Health and Human Services stated that NIOSH will assist in identifying ways to establish comprehensive vessel counts, which could include engaging state agencies. The agencies' comments reflect both the complexity of capturing reliable data on the size and characteristics of the commercial fishing vessel fleet and the need to do so. Determining the working group's membership, structure, and roles and responsibilities is an essential first step. Regardless of the working group's structure, it will be important to ensure that data collection is done in a manner that allows the data to be aggregated and analyzed in various ways, including at the national level. The Department of Homeland Security did not concur with our recommendation that the Coast Guard assess the rates of commercial fishing vessel accidents, injuries, and fatalities to determine whether certain factors—such as vessel length and region of operation—affect these rates. The Coast Guard stated that it has limited resources and capabilities to conduct such assessments and noted that NIOSH studies marine incidents to identify causal factors in fishing vessel casualties and could more effectively determine casualty rates. We agree that NIOSH has played, and can play, an important role in identifying commercial fishing fatalities and regional risk factors, but such assessments typically focus on fatalities in specific fisheries and generally do not consider factors such as vessel length or whether the vessel has been classed.
Further, the Coast Guard is the agency responsible for developing and enforcing regulations related to commercial fishing vessel safety, including classification requirements and the alternative-to-class approach. As such, the Coast Guard's Office of Investigations and Casualty Analysis leads the agency's investigation program to promote safety, protect the environment, and prevent future accidents. As part of its efforts, this office has previously analyzed data on commercial fishing vessel accidents. While we continue to believe that our recommendation is appropriately targeted to the Coast Guard, we acknowledge that the working group could determine that another agency is better positioned to conduct this analysis. As such, we have revised our recommendation to provide more flexibility to the agencies in determining how best to meet its intent. The Department of Homeland Security concurred with our recommendation that the Coast Guard issue regulations or guidance to clarify and implement the alternative-to-class approach. It noted that the Coast Guard is in the process of developing a more formal policy on best practices and expectations of the industry and implementing guidelines consistent with the intent of the legislation, which it hopes to complete by December 31, 2018. We also provided a draft of this report to the three classification societies we included in our review—ABS, DNV GL, and RINA—for their review and comment. ABS and DNV GL provided technical comments, which we incorporated as appropriate. We are sending copies of the report to the appropriate congressional committees. We are also sending a copy to the Secretary of Homeland Security, the Secretary of Health and Human Services, the Chairman of the National Transportation Safety Board, the Secretary of Commerce, and other interested parties. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. Should you or your staff have questions, please contact me at (202) 512-4841 or dinapolit@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI.

Appendix I: Selected Countries Have Varying Requirements for Classing Commercial Fishing Vessels

Other selected countries that, like the United States, are members of the Organization for Economic Cooperation and Development and have sizeable commercial fishing industries have established requirements for designing, constructing, and—in some instances—maintaining commercial fishing vessels to classification society standards, as described in table 4.

Appendix II: Objectives, Scope, and Methodology

This report evaluates the costs and benefits of classing commercial fishing vessels. Specifically, we assessed (1) what is known about the numbers and rates of commercial fishing vessel accidents, injuries, and fatalities; (2) what is known about the costs to construct and maintain classed commercial fishing vessels built since July 2013 and the effects of classing on vessel builders and owners; (3) the benefits associated with classing commercial fishing vessels; and (4) how the alternative-to-class approach compares with building and maintaining commercial fishing vessels to classification society standards.
To assess what is known about the numbers of commercial fishing vessel accidents, injuries, and fatalities, we collected and analyzed data from the Coast Guard's Marine Information for Safety and Law Enforcement database on commercial fishing vessel investigations for calendar years 2006 through 2015 to identify the number of vessel accidents and/or injuries or fatalities. We also collected relevant Coast Guard data on enforcement actions and boardings. To assess the reliability of the data, we reviewed related documentation, spoke with knowledgeable agency officials, and performed electronic testing for obvious errors in accuracy and completeness. Using latitudinal and longitudinal information collected during the Coast Guard's investigation of each commercial fishing vessel accident, we determined where each accident occurred and limited our analysis to accidents that involved U.S. vessels and occurred between 3 nautical miles and 200 nautical miles from shore, an area generally referred to as U.S. federal waters. For Texas, Puerto Rico, and the Gulf coast of Florida, we used the area between 9 nautical miles and 200 nautical miles from shore, which is consistent with federal waters for those jurisdictions. We found errors in the longitudinal and latitudinal data and could not match commercial fishing vessel accidents to an accurate location for 243 observations; we excluded these observations from our analysis. Overall, we determined that the data were sufficiently reliable for reporting the overall number of accidents, injuries, and fatalities over this time period. We attempted to separate the data by fishery management council region and interstate marine fisheries commission—regional partners of the National Oceanic and Atmospheric Administration (NOAA) that ensure sustainable fishery management throughout the United States—using the longitudinal and latitudinal boundaries of each region and commission. However, we found that the geographic location in which each accident occurred is not sufficiently reliable for determining the region or fishery in which a commercial fishing vessel operates. For example, the geographic location of an accident does not necessarily signify that the commercial fishing vessel was engaged in one of the fisheries managed by the regional council. In addition, according to National Marine Fisheries Service officials, the three interstate commissions work almost entirely on issues pertaining to shared fishery resources within the boundaries of their respective states and generally do not manage fishing activity in federal waters, so we could not reasonably assign an accident in federal waters to a region managed by one of these interstate commissions. We also collected and analyzed data from the National Institute for Occupational Safety and Health's (NIOSH) Commercial Fishing Incident Database on commercial fishing fatalities for calendar years 2006 through 2015 to identify causes of commercial fishing vessel fatalities over this period. To assess the reliability of the data, we reviewed related documentation, spoke with knowledgeable NIOSH officials, and performed electronic testing for obvious errors in accuracy and completeness. We determined that the data were sufficiently reliable for the purposes of reporting the number of fatalities over time.
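To illustrate the geographic screen described above, the following simplified Python sketch (ours, not the code used in our analysis) filters accident records to federal waters. It assumes each record already carries a computed distance from shore in nautical miles; the field names, the sample record, and its values are hypothetical.

    # Simplified illustration of the federal-waters screen described above.
    # 'distance_nm' (distance from shore in nautical miles, or None when
    # the coordinates could not be matched to a location) and
    # 'nine_nm_state_waters' (True for Texas, Puerto Rico, and the Gulf
    # coast of Florida) are hypothetical fields.
    def in_federal_waters(record):
        if record["distance_nm"] is None:   # unmatchable coordinates were
            return False                    # excluded (243 observations)
        inner = 9 if record["nine_nm_state_waters"] else 3
        return record["us_vessel"] and inner <= record["distance_nm"] <= 200

    sample = {"us_vessel": True, "distance_nm": 12.4,
              "nine_nm_state_waters": False}
    print(in_federal_waters(sample))        # True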
We also examined reports from National Transportation Safety Board (NTSB) investigations of commercial fishing vessel accidents for calendar years 2006 through 2015, which include some of the most serious accidents, to describe what NTSB identified as the probable causes of these accidents. To identify rates of commercial fishing vessel accidents over time, we requested data from the Coast Guard on the population of commercial fishing vessels that were actively catching, landing, and selling their catch. We collected Coast Guard data on the number of commercial fishing vessels from 2006 to 2015 with a valid certificate of documentation, which indicates that the vessel is registered with the Coast Guard and is greater than 5 net tons. We also contacted NIOSH and NOAA to discuss the ways, if any, that these agencies have estimated the size of the active commercial fishing vessel fleet for their studies or programs. After contacting the Coast Guard, NIOSH, and NOAA to collect data on the total number of active U.S. commercial fishing vessels, we determined that we could not identify sufficiently reliable data on the size of the active U.S. commercial fishing vessel fleet for 2006 through 2015 for the purposes of our analysis. These data reliability problems precluded us from calculating rates of accidents, injuries, or fatalities over this period. We interviewed officials from the Coast Guard, NIOSH, and NTSB regarding the investigations and analyses they have conducted on commercial fishing vessel accidents and the recommendations they have made to improve safety on board these vessels. We also interviewed officials from NOAA's National Marine Fisheries Service to discuss the roles and responsibilities of the regional fishery management councils and interstate marine fisheries commissions. To assess what is known about the costs to construct and maintain classed commercial fishing vessels built since July 2013 and the effects of classing on vessel builders and owners, we collected data on the costs associated with constructing and maintaining classed commercial fishing vessels from vessel builders and owners willing to share this information. Specifically, we analyzed (1) classification society design review fees quoted to two vessel builders located in the Gulf of Mexico and Pacific regions and other documentation these builders provided, including a construction bid; (2) another vessel builder's cost estimate for constructing a 90-foot-long classed commercial fishing vessel to be used in the Gulf of Mexico shrimp industry; and (3) documentation provided by one vessel owner and another individual with extensive experience in the commercial fishing industry, including the cost of various engines and generators—class certified and non-class certified—that could be installed during the construction process. We compared the quotes for these generators and engines to determine the cost differential between class-certified and non-class-certified equipment. The findings based on these data are not generalizable, but they do provide insight into the additional costs associated with constructing a classed commercial fishing vessel. In addition, we conducted interviews and discussion sessions with stakeholders in the commercial fishing industry to obtain the perspectives of vessel owners and/or operators, vessel builders, and commercial fishing organizations.
Specifically, we interviewed 13 vessel builders and 36 vessel owners and/or operators from across the United States, including those with both large and small businesses. We also interviewed representatives from 4 commercial fishing trade organizations that represent fisheries in Alaska and the Bering Sea, the Gulf of Mexico, the Pacific Ocean, and the Mid- and North Atlantic Ocean. To ensure we captured many different perspectives, we held three discussion sessions with stakeholders in the commercial fishing industry, inviting interested parties to attend—including vessel owners and builders, trade organization representatives, and naval architects—at locations across the country: Garden Grove, California; New Orleans, Louisiana; and Seattle, Washington. In total, 39 individuals involved in the commercial fishing industry attended one or more of these discussion sessions. From the testimonial information we collected through these interviews and discussion sessions, we identified common themes, including the impact of classing on vessel builders and owners. We also interviewed representatives from the three predominant classification societies in the United States—American Bureau of Shipping (ABS), Det Norske Veritas Germanischer Lloyd (DNV GL), and RINA—to discuss the fees they charge as part of the classification process. We interviewed three marine underwriters who insure commercial fishing vessels off the coasts of the Gulf of Mexico, the Pacific Ocean, and the Atlantic Ocean to discuss how classification affects insurance premiums. To assess the benefits associated with classing commercial fishing vessels, we obtained the perspectives of vessel owners and/or operators, vessel builders, commercial fishing trade organizations, and classification societies during the interviews and discussion sessions described above. The information obtained from interviews and discussion sessions cannot be generalized to all vessel builders, owners, or operators; however, it provides important insights into the experiences of these groups. We also spoke with representatives from ABS, DNV GL, and RINA, as well as marine safety experts, naval architects, academics who study commercial fishing vessel safety, and marine underwriters in fishing industries off the coasts of the Gulf of Mexico, the Pacific Ocean, and the Atlantic Ocean. From these interviews and discussion sessions, we identified common themes. We also reviewed Coast Guard and NIOSH studies related to improving commercial fishing vessel safety and the benefits each found with respect to classing commercial fishing vessels or improved accident outcomes. We collected data on the number of insurance claims submitted by commercial fishing vessel owners from 2013 through 2016 to two of the three marine underwriting companies we interviewed—the two willing to share this information—to determine the number of hull and machinery claims and the number of protection and indemnity claims that these companies processed over the period. The findings based on these data are not generalizable, but they illustrate the types of insurance claims made by commercial fishing vessel owners. To evaluate how the alternative-to-class approach compares with building and maintaining commercial fishing vessels to classification society standards, we collected and reviewed relevant statutes, documentation of Coast Guard rulemaking efforts, regulations, policies, and guidance, as well as classification society rules and standards.
We compared the requirements of the alternative-to-class approach with the steps associated with classification to determine the similarities between the two approaches. We interviewed cognizant officials from the Coast Guard to discuss the current policies and regulations in place to address commercial fishing vessels and how the alternative-to-class approach will be implemented. We also interviewed representatives from classification societies—including DNV GL, ABS, and RINA—as well as commercial fishing vessel owners and operators, naval architects, builders, and marine underwriters to discuss both approaches. We also collected information on commercial fishing vessel classification requirements from a non-generalizable sample of comparison countries that, like the United States, are members of the Organization for Economic Cooperation and Development and have sizeable fishing industries. Specifically, we selected Canada, Denmark, Spain, and the United Kingdom, which were among the countries with the largest fishing harvests from 2010 through 2014, according to country data reported by the United Nations' Food and Agriculture Organization. We collected and reviewed documentation of relevant requirements for the United States and each selected country and discussed the requirements with officials from the selected countries. We present this analysis in appendix I. We conducted this performance audit from June 2016 to December 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix III: Comments from the Department of Homeland Security

Appendix IV: Comments from the Department of Commerce

Appendix V: Comments from the Department of Health and Human Services

Appendix VI: GAO Contacts and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact above, Diana Moldafsky, Assistant Director; Laura Jezewski; Pedro Almoguera; Deanna Burns; Lorraine Ettaro; Danielle Giese; Laura Greifner; Kristine Hassinger; Ramzi Nemo; LeAnna Parkey; Erin Stockdale; Robin Wilson; and Ellen Wolfe made key contributions to this report.
Why GAO Did This Study

Commercial fishing has one of the highest death rates of any industry in the United States. Fishing vessels that are at least 50 feet long and were built after 2013 are required by law to be built and maintained to rules developed by a classification society, a process known as classing. Congress created an alternative-to-class approach in 2016, allowing vessels of certain sizes to be designed and built to equivalent standards in lieu of classing. The Coast Guard Authorization Act of 2015 included a provision for GAO to review the costs and benefits of classing commercial fishing vessels. This report assesses (1) known numbers and rates of commercial fishing vessel accidents, injuries, and fatalities; (2) what is known about the costs, effects, and benefits of constructing and maintaining classed vessels; and (3) how the alternative-to-class approach compares with classing. GAO collected data on vessel accidents, injuries, and fatalities; interviewed vessel owners, builders, classification societies, the Coast Guard, and other agencies; and studied classing costs.

What GAO Found

The Coast Guard, the only military service within the Department of Homeland Security (DHS), investigated 2,101 commercial fishing vessel accidents between 2006 and 2015 that occurred in federal waters; however, because there are no reliable data on the total number of commercial fishing vessels that are actively fishing, rates of accidents, injuries, and fatalities cannot be determined. Agencies, such as the Coast Guard, keep records of accidents, but without reliable data on active vessels, trend information cannot be determined. The Coast Guard and the National Marine Fisheries Service have separate efforts to collect data that could be used to develop an estimate of active commercial fishing vessels, but each agency is taking a different approach to doing so. These and other agencies agreed that it is important to calculate rates to assess commercial fishing vessel accidents, injuries, and fatalities. Establishing a mechanism—such as a working group—to coordinate efforts and collect reliable data on the number of active vessels and key characteristics, such as vessel age and length, would allow the agencies to do so in an efficient manner. While data on the costs to design, construct, and maintain classed vessels are limited, vessel owners, builders, and classification societies agree that classification increases costs, and they told GAO that the perceived costs of classing may deter vessel owners from purchasing new vessels in order to avoid classification requirements. However, they also agree that classification is one of many factors that contribute to safety. The alternative-to-class approach is more flexible than classing—for example, in its use of marine surveyors to verify vessel construction. Industry stakeholders and GAO's analysis, however, identified numerous questions and uncertainties regarding implementation of the approach, including licensing requirements for naval engineers and architects. The Coast Guard has not issued regulations or guidance to address these issues due, in part, to its ongoing efforts to issue regulations to implement safety-related legislation enacted in 2010 and 2012. However, without specific written procedures—either in the form of regulations or guidance—the Coast Guard cannot ensure consistent implementation of the alternative-to-class approach.
What GAO Recommends

Among GAO's recommendations, the Coast Guard and other agencies should form a working group to collect reliable data on the number of active fishing vessels. The Coast Guard should also issue regulations or guidance to address questions about the alternative-to-class approach. The agencies generally concurred with the recommendations, but DHS did not concur that the Coast Guard assess vessel accident rates. GAO revised this recommendation to allow the Coast Guard or another appropriate agency to do the assessment.
FCC Has Not Evaluated Lifeline's Performance in Meeting Program Goals but Has Taken Recent Steps toward Evaluation

FCC has not evaluated Lifeline's performance in meeting program goals but, as we found in May 2017, has taken recent steps toward evaluation. According to GAO's Cost Estimating and Assessment Guide, to use public funds effectively the government must meet the demands of today's changing world by employing effective management practices and processes, including the measurement of government program performance. In the past, FCC has called for program evaluations to review the administration of universal service generally, including Lifeline, but has not completed such evaluations. For example, FCC specified that it would review USAC 1 year after USAC was appointed as the permanent administrator to determine whether the universal service programs were being administered effectively. This review, which was planned to be completed by 1999, was never conducted. In 2005, FCC awarded a contract to the National Academy of Public Administration to study the administration of the USF programs generally, examine the tradeoffs of continuing with the current structure, and identify ways to improve the oversight and operation of universal service programs. However, we reported in May 2017 that, according to FCC officials, FCC subsequently terminated the contract and the study was never conducted. In March 2015, we found that FCC had not evaluated Lifeline's effectiveness in achieving its performance goals of ensuring the availability of voice service for low-income Americans while minimizing the burden on those who contribute to the USF. We recommended, and FCC agreed, that FCC conduct a program evaluation to determine the extent to which Lifeline is efficiently and effectively reaching its performance goals. Our May 2017 report raised additional questions about Lifeline's effectiveness in meeting its program goals. For example, we reported the following: FCC did not know how many of the 12.3 million households receiving Lifeline as of December 2016 also have non-Lifeline phone service (for which they pay out of pocket). Without knowing whether participants are using Lifeline as a primary or secondary phone service, we concluded that it is difficult for FCC to determine whether it is achieving the program's goal of increasing telephone subscribership among low-income consumers while minimizing the USF contribution burden. FCC revamped Lifeline in March 2016 to focus on broadband adoption and generally phase out phone service, in part because FCC recognized that most eligible consumers have phones without Lifeline and because FCC sought to close the "digital divide" of broadband adoption between low-income households and the rest of the country. However, broadband adoption rates have steadily increased for the low-income population absent a Lifeline subsidy for broadband. We found that at least two companies operating in a total of at least 21 states had begun offering in-home, non-Lifeline wireline broadband service for less than $10 per month to individuals who participate in public-assistance programs, such as SNAP or public housing. These providers' $10-per-month low-income broadband rate was less expensive than FCC's broadband reasonable-comparability cost benchmark of approximately $55 per month, which Lifeline subscribers would pay for a similar level of service.
Our May 2017 report also found that FCC has recently taken some steps toward evaluating Lifeline's performance in meeting program goals. Specifically, in the 2016 Lifeline Modernization Order, FCC instructed USAC to hire an outside, independent, third-party evaluator to complete a program evaluation of Lifeline's design, function, and administration. The order stipulated that the outside evaluator must complete the evaluation and USAC must submit the findings to FCC by December 2020. Because FCC expects Lifeline enrollment to increase as the program is expanded to include broadband service, the expansion could carry with it increased risks for fraud, waste, and abuse, as was the case with past expansions of the program. Completing the program evaluation as planned, and as we recommended in 2015, would help FCC determine whether Lifeline is meeting its stated goals of increasing telephone and broadband subscribership among low-income consumers while minimizing the burden on those who contribute to the USF.

Financial Controls Exist, with Others Planned, for the Lifeline Program, but Weaknesses Remain

In our May 2017 report, we found that FCC and USAC have established financial controls for Lifeline, including obtaining and reviewing information about billing, collecting, and disbursing funds. They have also developed plans to establish other controls, such as a national eligibility verifier (National Verifier) that would determine the eligibility of applicants seeking Lifeline service. However, as discussed in our May 2017 report, we found that weaknesses remain, including the lack of requirements to effectively control program expenditures above approved levels, concerns about the transparency of fees on customers' telephone bills, and a lack of FCC guidance that could result in Lifeline and other providers paying inconsistent USF contributions. To address these concerns, we recommended that the Chairman of FCC (1) require Commissioners to review and approve, as appropriate, spending above the budget in a timely manner; (2) require a review of customer bills as part of the contribution audit, including an assessment of whether the charges, including USF fees, meet FCC Truth-in-billing rules with regard to labeling, so customer bills are transparent and appropriately labeled and described, to help consumers detect and prevent unauthorized changes; and (3) respond to USAC requests for guidance and address pending requests concerning USF contribution requirements to ensure the contribution factor is based on complete information and that USF pass-through charges are equitable. FCC generally agreed with those recommendations. In addition, we found that USAC's banking practices for the USF result in oversight and accountability risks that FCC has plans to mitigate. Specifically, FCC maintains USF funds—whose net assets as of September 2016 exceeded $9 billion—outside of the U.S. Treasury, pursuant to Office of Management and Budget (OMB) advice provided in April 2000. OMB had concluded that the USF does not constitute public money subject to the Miscellaneous Receipts Statute, 31 U.S.C. § 3302, which requires that money received for the use of the United States be deposited in the Treasury unless otherwise authorized by law. As such, USF balances are held in a private bank account. However, subsequent to this OMB advice, in February 2005 we reported that FCC should reconsider this determination in light of the status of universal service monies as federal funds.
As discussed in our May 2017 report, according to correspondence we received from the FCC Chairman's Senior Legal Counsel, as of March 2017 FCC had decided to move the funds to the Treasury. FCC identified potential benefits of moving the funds to the Treasury. For example, FCC explained that having the funds in the Treasury would provide USAC with better tools for fiscal management of the funds, including access to real-time data and more accurate and transparent data. According to FCC, until the USF is moved into the Treasury, there are also some oversight risks associated with holding the fund in a private account. For example, the contract governing the account does not provide FCC with authority to direct bank activities with respect to the funds in the event USAC ceases to be the administrator of the USF. After we raised this matter with FCC officials during the course of our review, FCC, beginning in November 2016, sought to amend the contract between USAC and the bank to enable the bank to act on FCC instructions independently of USAC in the event USAC ceases to be the administrator. However, as of May 2017, the amended contract had not yet been signed. While FCC has put in place a preliminary plan to move the USF funds to the Treasury, as well as plans to amend the existing contract with the bank as an interim measure, several years have passed since this issue was brought to FCC's attention, and corrective actions have not been implemented. Further, under FCC's preliminary plan, it would not be until next year, at the earliest, that the funds would be moved to the Treasury. In May 2017, while FCC was reviewing a draft of our report, a senior FCC official informed us that FCC had experienced some challenges associated with moving the funds to the Treasury, such as coordinating across the various entities involved, which raised some questions as to when, and perhaps whether, the funds would be moved. Until FCC finalizes and implements its plan and moves the USF funds, the risks that FCC identified will persist and the benefits of having the funds in the Treasury will not be realized. As a result, in our May 2017 report, we recommended that the Chairman of FCC take action to ensure that the preliminary plans to transfer the USF funds from the private bank to the Treasury are finalized and implemented as expeditiously as possible. FCC agreed with this recommendation.

FCC and USAC Have Implemented Some Controls to Improve Subscriber Eligibility Verification, but Weaknesses Remain

FCC and USAC have implemented controls to improve subscriber eligibility verification, such as implementing the National Lifeline Accountability Database (NLAD) in 2014, which helps carriers identify and resolve duplicate claims for Lifeline-supported services. However, as discussed in our May 2017 report, our analysis of data from 2014, as well as our undercover attempts to obtain Lifeline service, revealed significant weaknesses in subscriber eligibility verification. Lifeline providers are generally responsible for verifying the eligibility of potential subscribers, but we found that their ability to do so is hindered by a lack of access to, or awareness of, state eligibility databases that can be used to confirm eligibility prior to enrollment. For example, not all states have databases that Lifeline providers can use to confirm eligibility, and some providers with whom we spoke were unaware of databases that were potentially available to them.
These challenges might be overcome if FCC establishes a National Verifier, as it plans to do nationwide by the end of 2019, to remove responsibility for verifying eligibility from the providers. Additionally, because USAC was not maintaining and providing information to providers about these databases, we recommended that USAC maintain and disseminate an updated list of state eligibility databases available to Lifeline providers, including the qualifying programs those databases access to confirm eligibility, to help ensure that Lifeline providers are aware of state eligibility databases and that USAC audits of Lifeline providers can verify that available state databases are being used to verify subscriber eligibility. FCC agreed with the recommendation. For our May 2017 report, to identify Lifeline subscribers who were potentially ineligible to participate in the program, we tested the eligibility of subscribers who claimed participation in Medicaid, SNAP, and Supplemental Security Income (SSI) using NLAD data as of November 2014. We focused our analysis on these three programs because FCC reported in 2012 that these were the three qualifying programs through which most subscribers qualify for Lifeline. We compared approximately 3.4 million subscribers who, according to information entered in NLAD, were eligible for Lifeline due to enrollment in one of these three programs to eligibility data for these programs. On the basis of our analysis of NLAD and public-assistance data, we could not confirm that a substantial portion of the selected Lifeline beneficiaries were enrolled in the Medicaid, SNAP, or SSI programs, even though, according to the data, they qualified for Lifeline by stating on their applications that they participated in one of these programs. In total, we were unable to confirm whether 1,234,929 of the 3,474,672 subscribers we reviewed, or about 36 percent, participated in the qualifying benefit programs they stated on their Lifeline enrollment applications or were recorded as such by Lifeline providers. If providers claimed and received reimbursement for each of the 1.2 million subscribers, then the subsidy amount associated with these individuals equals $11.4 million per month, or $137 million annually, at the current subsidy rate of $9.25 per subscriber. Because Lifeline disbursements are based on providers' reimbursement claims, not the number of subscribers a provider has in NLAD, our analysis of NLAD data could not confirm actual disbursements associated with these individuals. Given that our review was limited to those enrolled in SNAP or Medicaid in selected case-study states, and in SSI in states that participated in NLAD at the time of our analysis, our results likely understate the issue relative to the entire population of Lifeline subscribers. These results indicate that potential improper payments have occurred and have gone undetected. We plan to refer potentially ineligible subscribers identified through our analysis for appropriate action as warranted. As discussed in our May 2017 report, our undercover testing also found that Lifeline may be vulnerable to ineligible subscribers obtaining service, and it revealed examples of Lifeline providers being nonresponsive or providing inaccurate information. To conduct our 21 tests, we contacted 19 separate providers to apply for Lifeline service. We applied using documentation fictitiously stating that we were enrolled in an eligible public-assistance program or met the Lifeline income requirements.
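The subsidy figures above follow directly from the subscriber counts and the $9.25 monthly rate, as the short Python sketch below illustrates; the sketch is ours, and the variable names are hypothetical. Conceptually, an unconfirmed subscriber is one whose claimed qualifying program had no matching enrollment record in the benefit data.

    # Illustrative recomputation of the figures cited above; the counts
    # and subsidy rate are as reported, and the variable names are ours.
    reviewed    = 3_474_672   # subscribers compared to benefit-program data
    unconfirmed = 1_234_929   # participation could not be confirmed
    RATE        = 9.25        # Lifeline subsidy per subscriber per month

    share   = 100.0 * unconfirmed / reviewed   # about 36 percent
    monthly = unconfirmed * RATE               # about $11.4 million
    annual  = monthly * 12                     # about $137 million

    print(f"{share:.0f} percent unconfirmed; "
          f"${monthly/1e6:.1f} million per month; "
          f"${annual/1e6:.0f} million per year")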
In our undercover tests, we were approved to receive Lifeline services by 12 of the 19 Lifeline providers using fictitious eligibility documentation. We also experienced instances in which our calls to providers were disconnected, provider representatives transmitted erroneous information, or representatives were unable to answer questions about the status of our application. For example, one Lifeline provider told us that our application was not accepted by the company because our signature had eraser marks; however, our application had been submitted via an electronic form on the provider's website and was not physically signed. While our tests are illustrative and not representative of all Lifeline providers or applications submitted, these results suggest that Lifeline providers do not always properly verify eligibility and that applicants may encounter similar difficulties when applying for Lifeline benefits. As described above, these challenges might be overcome if FCC establishes a National Verifier, as it plans to do nationwide by the end of 2019, to remove responsibility for verifying eligibility from the providers.

FCC and USAC Have Taken Some Steps to Improve Oversight of Lifeline Providers, but Remaining Gaps Could Allow Noncompliance with Program Rules

FCC and USAC have implemented some mechanisms to enhance oversight of Lifeline providers, as discussed in our May 2017 report, but we found that remaining gaps could allow noncompliance with program rules. For example, in July 2014, FCC took additional measures to combat fraud, waste, and abuse by creating a strike force to investigate violations of USF program rules and laws. According to FCC, the creation of the strike force is part of the agency's commitment to stopping fraud, waste, and abuse and policing the integrity of USF programs and funds. Similarly, in June 2015, FCC adopted a rule requiring Lifeline providers to retain the eligibility documentation used to qualify consumers for Lifeline support, to improve the auditability and enforcement of FCC rules. However, we found that FCC and USAC have limited oversight of Lifeline provider operations and the internal controls used to manage those operations. The current structure of the program relied throughout 2015 and 2016 on over 2,000 Eligible Telecommunication Carriers (ETC) to provide Lifeline service to eligible beneficiaries. These companies are relied on not only to provide telephone service, but also to create Lifeline applications, train employees and subcontractors, and make eligibility determinations for millions of applicants. USAC's reliance on Lifeline providers to determine eligibility and subsequently submit accurate and factual invoices creates a significant risk that improper payments will occur and, under current reporting guidelines, go undetected and unreported. Federal internal control standards state that management retains responsibility for the performance and processes assigned to service organizations performing operational functions. Consistent with those standards, FCC and USAC would need to understand the extent to which a sample of these internal controls is designed and implemented effectively to ensure the controls are sufficient to address program risks and achieve the program's objectives. We identified key Lifeline functions for which FCC and USAC had limited visibility.
For example, we found instances of Lifeline providers utilizing domestic or foreign-operated call centers for Lifeline enrollment. When we asked FCC officials about Lifeline providers that outsource program functions to call centers, including those overseas, they told us that such information is not tracked by FCC or USAC. With no visibility over these call centers, FCC and USAC have no way to verify whether such call centers comply with Lifeline rules. FCC and USAC also have limited knowledge about potentially adverse incentives that providers might offer employees to enroll subscribers. For example, some Lifeline providers pay commissions to third-party agents to enroll subscribers, creating a financial incentive to enroll as many subscribers as possible. When companies responsible for distributing Lifeline phones and service reward employees for each enrollment, the possibility increases that fictitious or ineligible individuals will be enrolled in Lifeline. Highlighting the extent of this risk, in April 2016 FCC announced approximately $51 million in proposed fines against one Lifeline provider due to, among other things, its sales agents purposely enrolling tens of thousands of ineligible and duplicate subscribers in Lifeline using shared or improper eligibility documentation. To test internal controls over employees associated with Lifeline for our May 2017 report, we sought employment with a company that enrolls individuals in Lifeline. We were hired by a company and were allowed to enroll individuals in Lifeline without ever meeting any company representatives, conducting an employment interview, or completing a background check. After we were hired, we completed two fictitious Lifeline applications as an employee of the company, successfully enrolled both of these fictitious subscribers into Lifeline using fabricated eligibility documentation, and received compensation for these enrollments. The results of these tests are illustrative and cannot be generalized to any other Lifeline provider. We plan to refer this company for appropriate action as warranted. As stated above, these challenges might be overcome if FCC establishes a National Verifier, as it plans to do nationwide by the end of 2019, to remove responsibility for verifying eligibility from the providers. In addition, in May 2017, we made two recommendations to help address control weaknesses and related program-integrity risks. Specifically, we recommended that FCC establish time frames for evaluating compliance plans and develop instructions, with criteria, for how FCC reviewers should evaluate these plans against Lifeline's program goals. We also recommended that FCC develop an enforcement strategy that details which violations lead to penalties, and apply that strategy as consistently as possible to all Lifeline providers to ensure consistent enforcement of program violations. FCC generally agreed with these recommendations. In conclusion, Lifeline's large and diffuse administrative structure creates a complex internal control environment susceptible to significant risk of fraud, waste, and abuse. FCC's and USAC's limited oversight of important aspects of program operations further complicates the control environment—heightening program risk.
We are encouraged by FCC’s recent steps to address weaknesses we identified, such as the 2016 order establishing a National Verifier, which, if implemented as planned, could further help to address weaknesses in the eligibility-determination process. We also plan to monitor the implementation status of the recommendations we made in May 2017. Chairman Thune, Ranking Member Nelson, and members of the Committee, this concludes my prepared remarks. I would be happy to answer any questions that you may have at this time. GAO Contact and Staff Acknowledgments For further information regarding this testimony, please contact Seto J. Bagdoyan at (202) 512-6722 or bagdoyans@gao.gov. In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this testimony are Dave Bruno (Assistant Director), Scott Clayton (Analyst-in-Charge), and Daniel Silva. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study

Created in the mid-1980s, FCC's Lifeline program provides discounts to eligible low-income households for home or wireless telephone and, as of December 2016, broadband service. Lifeline reimburses telephone companies that offer discounts through the USF, which in turn is generally supported by consumers by means of a fee charged on their telephone bills. This testimony is based on GAO's May 2017 report and discusses steps FCC has taken to measure Lifeline's performance in meeting goals; steps FCC and USAC have taken to enhance controls over finances, subscribers, and providers; and any weaknesses that might remain. For the May 2017 report, GAO analyzed documents and interviewed officials from FCC and USAC. GAO also analyzed subscriber data from 2014 and performed undercover tests to identify potential improper payment vulnerabilities. The results of this analysis and testing are illustrative, not generalizable.

What GAO Found

In its May 2017 report, GAO found the Federal Communications Commission (FCC) has not evaluated the Lifeline program's (Lifeline) performance in meeting its goals of increasing telephone and broadband subscribership among low-income households by providing financial support, but it has recently taken steps to begin to do so. FCC does not know how many of the 12.3 million households receiving Lifeline as of December 2016 also have non-Lifeline phone service, or whether participants are using Lifeline as a secondary phone service. FCC revamped Lifeline in March 2016 to focus on broadband adoption; however, broadband adoption rates have steadily increased for the low-income population absent a Lifeline subsidy for broadband. Without an evaluation, which GAO recommended in March 2015, FCC is limited in its ability to demonstrate whether Lifeline is efficiently and effectively meeting its program goals. In a March 2016 Order, FCC announced plans for an independent third party to evaluate Lifeline design, function, and administration by December 2020. FCC and the Universal Service Administrative Company (USAC)—the not-for-profit organization that administers the Lifeline program—have taken some steps to enhance controls over finances and subscriber enrollment. For example, FCC and USAC established some financial and management controls regarding billing, collection, and disbursement of funds for Lifeline. To enhance the program's ability to detect and prevent ineligible subscribers from enrolling, FCC oversaw completion in 2014 of an enrollment database and, in June 2015, FCC adopted a rule requiring Lifeline providers to retain eligibility documentation used to qualify consumers for Lifeline support to improve the auditability and enforcement of FCC rules. Nevertheless, in its May 2017 report, GAO found weaknesses in several areas. For example, Lifeline's structure relies on over 2,000 Eligible Telecommunication Carriers that are Lifeline providers to implement key program functions, such as verifying subscriber eligibility. This complex internal control environment is susceptible to risk of fraud, waste, and abuse as companies may have financial incentives to enroll as many customers as possible. On the basis of its matching of subscriber to benefit data, GAO was unable to confirm whether about 1.2 million individuals of the 3.5 million it reviewed, or 36 percent, participated in a qualifying benefit program, such as Medicaid, as stated on their Lifeline enrollment application.
FCC's 2016 Order calls for the creation of a third-party national eligibility verifier by the end of 2019 to determine subscriber eligibility. Further, FCC maintains the Universal Service Fund (USF)—with net assets of $9 billion, as of September 2016—outside the Department of the Treasury in a private bank account. In 2005, GAO recommended that FCC reconsider this arrangement given that the USF consists of federal funds. In addition to addressing any risks associated with having the funds outside the Treasury, FCC identified potential benefits of moving the funds. For example, by having the funds in the Treasury, USAC would have better tools for fiscal management of the funds. In March 2017, FCC developed a preliminary plan to move the USF to the Treasury. Until FCC finalizes and implements its plan and actually moves the USF funds, the risks that FCC identified will persist and the benefits of having the funds in the Treasury will not be realized.

What GAO Recommends

In its May 2017 report, GAO made seven recommendations, including that FCC ensure plans to transfer the USF from the private bank to the Treasury are finalized and implemented expeditiously. FCC generally agreed with all the recommendations.
Background

Under the Rail Safety Improvement Act of 2008, a PTC system must be designed to prevent train-to-train collisions, derailments due to excessive speed, incursions into work zone limits, and the movement of a train through a switch left in the wrong position. Railroads may implement any PTC system that meets these requirements, and the majority of the 29 commuter railroads are implementing one of three primary types of systems: the Interoperable Electronic Train Management System (I-ETMS), the Advanced Civil Speed Enforcement System, or Enhanced Automated Train Control (E-ATC). PTC's intended safety benefits can only be achieved when all required hardware has been installed and tested, and a train is able to communicate continually and in real time with the software and equipment of its own railroad and also with that of other railroads operating on the same tracks. Real-time communication is needed to account for changing track conditions, which may, for example, include temporary speed restrictions where railroad employees are conducting track maintenance. Figure 1 illustrates how one system is intended to operate. PTC's multi-step implementation process can be grouped into three primary phases (see fig. 2). Each phase involves key activities for railroads to complete—such as installing PTC equipment—as well as the submission of key documents for FRA review and approval—such as test plans. Based on railroad data reported to FRA, most commuter railroads are currently in the second phase, which involves system design, installation, and testing. According to a recent FRA presentation, completing key activities within this phase is the near-term focus for many commuter railroads. According to FRA officials, railroads must complete certain implementation steps sequentially, while other activities can be worked on simultaneously; for example, railroads may work to finish installing locomotive and wayside equipment while also beginning testing on an initial track segment. Furthermore, based on railroads' PTC implementation plans, the scale of implementation activities can vary by railroad, based on the size of the railroad and the number of components to be installed. For example, one relatively large commuter railroad must install computer hardware on 528 locomotives and 789 wayside units along 218 route miles, while one relatively small commuter railroad's installation is limited to 17 locomotives and 35 wayside units along 32 route miles. According to FRA, full implementation of PTC is achieved when a railroad's system is FRA-certified and interoperable, and all hardware, software, and other components have been fully installed and are in operation on all route miles required to use PTC. The PTC system is required to be interoperable, meaning that the locomotives of any host railroad and tenant railroad operating on the same track segment will communicate with and respond to the PTC system, allowing uninterrupted movements across property boundaries. In early 2016, railroads required to install PTC had to submit revised implementation plans to FRA that included a schedule and milestones for specific activities, such as installing locomotive and wayside hardware, acquiring radio spectrum (if necessary), and training employees who will have to use and operate PTC systems. Railroads are required to report annually to FRA certain information on their implementation progress.
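To make the enforcement concept concrete before turning to implementation status, the sketch below models, in highly simplified form, how an onboard PTC component might compare a train's state against its current movement authority and speed restriction and trigger a penalty brake application. The class names, the constant-deceleration braking model, and all numbers are our own illustrative assumptions, not the design of I-ETMS or any other deployed system.

```python
from dataclasses import dataclass

@dataclass
class TrainState:
    position_m: float       # current position along the track, in meters
    speed_mps: float        # current speed, in meters per second
    decel_mps2: float       # assumed constant braking deceleration

@dataclass
class MovementAuthority:
    limit_m: float          # end of authority, e.g., a work zone or a misaligned switch
    speed_limit_mps: float  # speed restriction in effect on this segment

def braking_distance(state: TrainState) -> float:
    """Distance needed to stop from the current speed at constant deceleration."""
    return state.speed_mps ** 2 / (2 * state.decel_mps2)

def must_enforce(state: TrainState, authority: MovementAuthority) -> bool:
    """Trigger penalty braking if the train is overspeed or cannot stop short of its authority."""
    overspeed = state.speed_mps > authority.speed_limit_mps
    cannot_stop = state.position_m + braking_distance(state) >= authority.limit_m
    return overspeed or cannot_stop

# Example: a train 900 meters from the end of its authority, traveling 30 m/s.
train = TrainState(position_m=0.0, speed_mps=30.0, decel_mps2=0.5)
authority = MovementAuthority(limit_m=900.0, speed_limit_mps=35.0)
print(must_enforce(train, authority))  # True: stopping takes 30**2 / (2 * 0.5) = 900 m
```

In a real deployment, a check like this would run continuously against track data and temporary speed restrictions received over the radio network, which is why interoperable, real-time communication is central to the system.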
As part of overseeing railroads’ PTC implementation, FRA established a PTC Task Force in May 2015 to track and monitor individual railroads’ progress. Railroads are also required to report quarterly to FRA on the status of PTC implementation in several areas such as: locomotives equipped, employees trained, territories where revenue service demonstration (RSD) has been initiated, and route miles in PTC operation. FRA’s oversight tools include assessing civil penalties if a railroad fails to comply with legal requirements, including a railroad’s failure to comply with its implementation plan. FRA has a national PTC director, designated PTC specialists in the 8 FRA regions, and a few additional engineers and test monitors responsible for overseeing technical and engineering aspects of implementation and reviewing railroad submissions of documents and test requests. FRA officials told us they conduct various types of PTC-related work simultaneously, such as providing technical assistance to railroads, addressing questions, and reviewing documentation submitted by railroads. As railroads progress with testing and before completing implementation, FRA must review and approve a safety plan for each railroad and certify the PTC system. Commuter railroads that will not be able to implement a PTC system by December 31, 2018, may receive a maximum 2-year extension if they meet six criteria set forth in statute. Specifically, commuter railroads must demonstrate, to the satisfaction of the Secretary of Transportation, that they have: (1) installed all PTC system hardware; (2) acquired all necessary spectrum; (3) completed required employee training; (4) included in a revised implementation plan an alternative schedule and sequence for implementing their PTC system as soon as practicable; (5) certified to FRA that they will be in full compliance with PTC requirements by the date provided in the alternative schedule and sequence; and (6) either initiated RSD on at least one territory required to have operations governed by a PTC system or “met any other criteria established by the Secretary.” Progress Reported in Some Implementation Areas, but Significant Work Remains Most of the 29 commuter railroads have reported progress in some of the key areas of PTC implementation that FRA monitors, such as locomotive and wayside equipment installation, but the amount of progress reported varies across individual railroads (see fig. 3 below). Over half of the commuter railroads reported that they have made substantial progress in some initial implementation activities, while other railroads reported that they have made much more limited progress or have yet to begin equipment installation or employee training. For example, as of the end of September 2017: Locomotive Equipment Installation: 18 commuter railroads reported 50 percent or more of their locomotive PTC equipment was installed, and of these, 13 had completed installation. In contrast, 6 railroads reported that they had not started installation of locomotive equipment. Wayside Equipment Installation: 16 commuter railroads reported 50 percent or more of their wayside PTC equipment was installed, and half of them reported that they had completed installation. In contrast, 7 reported that less than 20 percent of this equipment was installed. Employee Training: 11 commuter railroads reported completing PTC training for 50 percent or more of their employees requiring training. Of these, four reported that they had completed employee training. 
Thirteen commuter railroads had completed 10 percent or less of their employee training, and of these, 11 reported that they had not started training their employees. However, some commuter railroad representatives we spoke with stated that they are waiting to conduct training until their PTC system is closer to deployment. For example, representatives from one railroad told us they are waiting to conduct training so employees will be recently trained and familiar with PTC as the system is rolled out. Notably, commuter railroads reported that they have made the most progress in obtaining spectrum, which allows PTC components to transmit information about a train's movements and location. Specifically, 15 of the 17 railroads that require spectrum reported that they have obtained it. The two other railroads reported that they are in discussions to obtain leased spectrum. Beyond the initial implementation activities, much work remains for the majority of commuter railroads to complete other key PTC activities that will enable them to finish implementation. PTC implementation requires many additional steps to integrate equipment and software systems that go beyond installing equipment and training employees, and the majority of commuter railroads reported that they continue to work to complete these steps, which are technically complex and time-consuming. For example, as of the end of September 2017:

Locomotives Fully Equipped and PTC-Operable: Fifteen commuter railroads reported that half or more of their locomotives were fully equipped and PTC-operable, meaning that all necessary onboard hardware and software is installed and commissioned, and is capable of operating over a PTC-equipped territory. Eight commuter railroads reported that none of their locomotives were fully equipped and operable.

Field Testing: Thirteen railroads reported that they had begun field testing—a key implementation milestone that precedes RSD and allows railroads to assess how PTC components and software function together. FRA officials said that the testing phase can be a long and difficult process, as data obtained during field testing must prove the functionality of the system and be included as part of a railroad's application to enter RSD.

RSD: Following successful field testing, FRA may grant a railroad approval to enter the next level of testing, RSD. In RSD, testing is performed on trains operating PTC as part of regular operations. According to FRA, RSD is the final phase of testing that a railroad completes in order to validate and verify its PTC system, and the results from RSD, along with earlier testing, are to be included in the safety plan a railroad submits to FRA. While six commuter railroads reported that they had begun RSD, most had not yet reached this key milestone—including some of the largest commuter railroads.

Conditional Certification: Once FRA approves a railroad's safety plan, the railroad receives a PTC system certification. According to FRA officials, as of September 30, 2017, only two commuter railroads were conditionally certified—meaning FRA has reviewed their safety plans and granted conditional approval for PTC operations, and the railroads are providing regular service in PTC operations—and two additional commuter railroads had submitted a safety plan for FRA review.
Given the variation in commuter railroads' progress, especially related to completing later-stage PTC activities such as testing and developing safety plans, 13 of 29 commuter railroads told us they planned to seek a deadline extension, and the remaining 16 told us they did not intend to seek an extension. However, the number of commuter railroads planning to seek an extension is subject to change before the end of 2018.

Over Half of Commuter Railroads May Be at Risk of Not Meeting the 2018 Deadline or Criteria for RSD-based Extension, Though Numerous Factors Create Uncertainty

Based on our analysis of the PTC schedules of the 29 commuter railroads, over half may not have sufficient time to complete activities needed to implement PTC by the end of 2018 or to qualify for an extension of that deadline by meeting criteria based on initiating RSD—for the purposes of this statement, referred to as an RSD-based extension. In particular, our analysis focused on the time likely needed for railroads to conduct RSD activities, because RSD is both the final step of field testing required by the 2018 deadline and one of the statutory options railroads have in seeking a deadline extension. For our analysis, we compared the amount of time railroads plan for completing two key milestones—installing the back office server and conducting field testing—to the amount of time FRA officials estimate is required for each milestone and to the experiences of railroads that have already completed RSD. However, it is important to recognize that numerous factors could affect railroads' planned and future progress. For example, commuter railroads could face delays due to unexpected issues with PTC components or FRA reviews of documents submitted by the railroads.

Over Half of Commuter Railroads May Be at Risk

In May 2017, FRA sent letters to 14 commuter railroads and their respective state departments of transportation and governors informing the recipients that they had not installed at least 50 percent of their required locomotive and wayside equipment. In these letters, FRA raised concerns that these railroads were at risk of not meeting the 2018 deadline and not completing requirements for a deadline extension. Subsequently, in January 2018, FRA applied a more stringent benchmark—whether a railroad had installed at least 65 percent of all equipment—and determined that 13 commuter railroads remained at risk. Using this more stringent criterion, only one railroad had made enough progress installing equipment to no longer be classified as at risk by FRA. In addition to FRA's benchmarks for equipment installation, our analysis more broadly evaluated railroads' progress in completing other implementation activities that follow equipment installation and that FRA and stakeholders said are more difficult to achieve. Specifically, we analyzed commuter railroads' planned schedules for two key milestones to determine whether these railroads appear to have built sufficient time into their implementation plans to complete these and other activities by the 2018 deadline or to qualify for an RSD-based extension.
The two key milestones we examined, both of which need to be completed before a railroad enters RSD, were:

installing the back office server (BOS) and associated software necessary to connect and interface with wayside, locomotive, and dispatch equipment (the BOS transmits and receives data among this equipment that enables PTC to work); and

conducting field testing, in particular testing of installed infrastructure and initial assessments of the PTC system's overall functionality on trains that are not transporting passengers or operating during regular passenger service.

Our analysis found that at least one-quarter, and potentially up to approximately two-thirds, of commuter railroads may not have sufficient time to enter RSD and, thus, may not meet the 2018 PTC implementation deadline or qualify for an RSD-based extension. These railroads vary by size and type of PTC system and by whether they plan to apply for a deadline extension. Specifically, our analysis found the following:

Projection based on BOS status: Between 9 and 19 commuter railroads appear to be at potential risk of not meeting the 2018 deadline or qualifying for an RSD-based extension based on our analysis. Our analysis found that the 6 commuter railroads already in RSD took an average of 10 months from installing the BOS to starting RSD. However, the schedules of 9 railroads indicate that they plan to install a BOS less than 10 months before the 2018 deadline. We believe that, given the past experience of other railroads, this places these 9 railroads at potential risk. Moreover, FRA officials estimate that it can take 2 to 3 years for a railroad to install and prepare the BOS and associated software to support testing and RSD. Applying FRA's 2-year installation estimate (which would have required BOS installation before January 1, 2017) raises the number of railroads potentially at risk of not meeting the deadline or qualifying for an RSD-based extension to 19.

Projection based on time allowed to conduct field testing: Based on our review of the planned schedules, between 7 and 14 railroads may not have built sufficient time into their plans either to complete field testing ahead of the 2018 deadline or to qualify for an RSD-based extension. Commuter railroads and FRA officials told us that field testing is challenging and can take a substantial amount of time due to, for example, unanticipated issues and limited available track for testing given regular passenger operations. On average, our analysis found that the 6 commuter railroads already in RSD took 7 months to move from starting field testing to starting RSD. However, 7 commuter railroads plan to start their field testing less than 7 months before the 2018 deadline, which raises concerns about their ability to conduct field testing before that date. Moreover, FRA officials told us that moving from the start of field testing to the start of RSD can take between 1 and 3 years, averaging about 2 years, and that most railroads underestimate the amount of time needed for testing. When we applied the lower end of FRA's estimate, the number of railroads at potential risk grows to the 14 that plan to start field testing less than a year prior to the 2018 deadline. As a result, they could be at risk of not meeting the 2018 deadline or qualifying for an RSD-based extension.
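A minimal sketch of the screening logic behind these two projections follows. The railroad and its planned date are hypothetical; the benchmark values are the ones reported above (the 10-month and 7-month averages observed for the six railroads already in RSD, and the low end of FRA's longer estimates), and the month arithmetic is a simplifying assumption.

```python
from datetime import date

DEADLINE = date(2018, 12, 31)

# Benchmarks reported above: average months from each milestone to starting RSD
# among the six railroads already in RSD, and the low end of FRA's estimates.
OBSERVED_MONTHS = {"bos_install": 10, "field_testing": 7}
FRA_LOW_MONTHS = {"bos_install": 24, "field_testing": 12}

def months_until_deadline(planned: date) -> float:
    """Approximate months of lead time between a planned milestone date and the deadline."""
    return (DEADLINE - planned).days / 30.4

def at_risk(milestone: str, planned: date, benchmarks: dict) -> bool:
    """Flag a railroad whose planned milestone leaves less lead time than the benchmark."""
    return months_until_deadline(planned) < benchmarks[milestone]

# Hypothetical railroad planning to install its BOS at the end of June 2018:
# roughly 6 months of lead time against a 10-month observed benchmark.
print(at_risk("bos_install", date(2018, 6, 30), OBSERVED_MONTHS))  # True
print(at_risk("bos_install", date(2018, 6, 30), FRA_LOW_MONTHS))   # True under FRA's estimate as well
```

Because the observed averages and FRA's estimates differ, applying each benchmark in turn yields the lower and upper bounds of the ranges reported above.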
We used RSD as a benchmark for our analysis of key milestones based on the importance of this milestone in implementing PTC and on the three RSD-based alternative criteria that FRA has approved to date. While the three approved alternative criteria all include RSD, FRA has broad authority to approve "any other" alternative criteria even if not based on RSD, as noted above. One FRA official told us the agency approved these three alternative criteria requests because they were all based on specific, quantifiable measures, rather than because they included RSD in particular. FRA officials stated that they have not issued guidance on uniform alternative criteria because they will strive for railroads to meet the criteria for a deadline extension that are listed in statute and want the discretion to make determinations on a case-by-case basis. In addition, FRA officials said they want to ensure that each railroad's criteria are consistent with the statutory requirements for final implementation by December 31, 2020. Because it is unknown what alternative criteria FRA may establish in the coming months, which may not include RSD, it is difficult to determine at this time whether the railroads we found to be potentially at risk of not qualifying for an RSD-based extension might be more or less likely to qualify for an extension based on other, non-RSD criteria.

Many Factors May Affect Commuter Railroads' Ability to Meet the Deadline or Qualify for an Extension

Much uncertainty exists regarding railroads' ultimate implementation progress and their ability to meet the 2018 deadline or qualify for an extension. This uncertainty is due, in part, to the fact that PTC is a new way of operating and involves technologies that are more complex to implement than many other railroad capital projects. Furthermore, a number of factors can affect commuter railroads' planned and future progress, including unexpected setbacks in installing PTC components as well as resource and capacity issues. Below we highlight some of the factors that could affect implementation progress.

Limited Industry Expertise and Resources

Three out of five PTC contractors and suppliers and about half of the commuter railroads we spoke with acknowledged that, industrywide, there is a limited number of individuals with the PTC technical expertise needed to implement the technology successfully. This shortage can affect the ability of railroads and contractors to meet planned schedules. For example, one large commuter railroad said it took a year and a half to hire an internal expert to continue work on its PTC project. In addition, five commuter railroads told us that they faced other issues with their prime contractors missing milestones; going forward, such issues could impede railroads' progress during the coming year. Also, though most railroads we spoke to are relying on contractors, some commuter railroads may lack the in-house resources and expertise to plan and oversee a project as large and complex as PTC. Representatives from three commuter railroads we interviewed noted that PTC is not a traditional capital or construction project for a railroad; therefore, it requires additional expertise. FRA officials also stated that small commuter railroads may not have technical capacity or expertise with large contracts for such complex projects, especially given limited industry resources.
In addition to limited expertise and resources, some commuter railroads told us they faced unexpected delays in obtaining PTC equipment, such as radios, from the supplier. Some PTC equipment is only available from a single provider, which can lead to delays in executing contracts and obtaining equipment. Three commuter railroads we spoke with said they encountered issues executing contracts for PTC radios, in particular negotiating unique liability requirements sought by the only supplier of this equipment, which resulted in delays or higher overall costs to the railroads. One railroad noted that executing sole-source contracts under such circumstances is particularly problematic for state and public agencies.

Interoperability and Host and Tenant Coordination

As noted above, PTC is being implemented by different types of railroads using different systems, and achieving interoperability among PTC systems can complicate implementation. For example, Northeast Corridor railroads that are implementing versions of the Advanced Civil Speed Enforcement System need interoperability with freight railroads using I-ETMS. Even railroads that are installing the same PTC system have to take significant steps to ensure that systems will communicate and interoperate properly. In one case, a railroad told us that it is equipping its locomotives with equipment for multiple PTC systems to ensure that it can operate on various host railroads' tracks. Some commuter railroads that only operate as tenants on other railroads' tracks may be able to complete some PTC implementation work more quickly, as these railroads may benefit from work the host railroads have already completed as they coordinate to implement PTC. For example, representatives from one commuter railroad we spoke with said they have to acquire and install PTC equipment on their locomotives but rely on the host railroads to install the remainder of the necessary PTC infrastructure. These tenant-only commuter railroads, however, have to coordinate field testing and RSD with the host railroads.

Schedule Changes

Unexpected issues with components or technology can also require additional time to complete certain activities, causing schedules to slip. Such issues could affect railroads currently on schedule as well as railroads pursuing aggressive schedules in an effort to overcome late starts or early setbacks. For example, representatives from 10 railroads we spoke with said that installing the BOS and associated software, and ensuring it functions properly, can pose a challenge. One contractor told us that once the BOS is delivered to a railroad, substantial testing work remains, and unexpected issues inevitably arise during testing, even if the BOS works according to all specifications. Representatives from one railroad said that despite strong organizational commitment to implementation and setting internal targets for progress, their PTC project schedule slipped many times over the course of implementation due to a variety of issues, including ongoing software updates that caused delays while also straining the budget and burdening staff. Representatives from that commuter railroad also noted that equipping vehicles with PTC components took three times as long as originally expected (3 years instead of 1). However, some railroads are looking for ways to accelerate implementation. For example, representatives from one railroad said they made the difficult decision to cut some weekend passenger service to accelerate wayside equipment installation.
Therefore, as representatives from one railroad articulated, given the schedule slippage experienced by railroads further along in implementation, railroads with aggressive schedules would have a limited ability to accommodate any additional delays.

FRA's Resources and Capacity

As the 2018 deadline approaches and railroads progress with implementation activities, the amount of documentation railroads will submit to FRA for review and approval is likely to increase significantly. For example, FRA reported in summer 2017 that it had taken between 10 and 100 days to review each of the test requests it received from railroads. As the 2018 deadline approaches, FRA will have to review a considerable number of additional test plans and procedures as well as applications to begin RSD. In addition, FRA will have to concurrently review any safety plans that are submitted by railroads reaching the certification phase. At the American Public Transportation Association's (APTA) Commuter Railroad Summit in June 2017, FRA officials said that they expect each safety plan review—which involves all the regional specialists and some contract personnel—to take between 6 and 12 months. These plans are about 5,000 pages in length. FRA officials told us that reviewing all of the safety plans in a timely manner will be a challenge given staff resources. FRA has 12 technical staff dedicated to the review of railroads' PTC documentation and monitoring of PTC testing. Representatives from 10 out of 19 commuter railroads we interviewed said they are concerned about FRA's ability to review submitted documentation in a timely manner.

Lessons Learned

As railroads continue to progress with their projects and the industry becomes more experienced with PTC, railroads could benefit from lessons learned. For example, representatives from one railroad that is implementing I-ETMS, the system all large Class I freight railroads are implementing, told us that they anticipate being able to capitalize on lessons learned from freight railroads that have operated in RSD. By leveraging the freight railroads' experiences, one commuter railroad hopes to address issues before testing, rather than during it, and therefore move more quickly through the testing process. If commuter railroads are able to apply lessons learned from other railroads' testing processes, then they may be able to accelerate their implementation efforts. Railroads may also accelerate implementation schedules as they become more adept at the overall testing process, which involves submitting test documents to FRA and scheduling multiple tests. This could potentially shorten the average time it takes a railroad to complete one or more of the key milestones analyzed. The two commuter railroads that have been conditionally certified told us they have met with other commuter railroads informally and have shared their project experiences as a way to facilitate information sharing.

FRA Monitors Railroads' Progress but Has Not Systematically Communicated with Them or Prioritized Efforts

FRA Monitors Railroads' Implementation Progress, Reviews Documents, and Shares PTC Information

Since 2015, FRA has assumed additional roles and responsibilities—primarily through the PTC Task Force and regional PTC specialists—to monitor railroads' implementation progress, review required documentation, and share information about implementation steps and activities.
Monitoring and Document Review: In response to a recommendation in our September 2015 report, FRA began to identify and collect additional information from the railroads to enable it to effectively track and monitor railroads' PTC progress. For example, in 2016, the PTC Task Force began collecting quarterly progress data and monitoring railroads' annual reports to track progress in meeting the PTC implementation milestones set out in railroads' implementation plans, such as locomotive equipment installed at the end of the year. As previously noted, the Task Force used this implementation progress data in May 2017 to identify 14 commuter railroads at risk of not meeting the 2018 deadline or requirements for an extension. FRA also monitors railroads' PTC implementation through meetings with railroad and industry associations, visits to individual railroads, and reviews of and comments on PTC documentation submissions, such as requests to begin field testing and RSD. FRA officials told us that they monitor railroads' progress to determine how much commuter railroads understand about the implementation process and to trigger discussions between FRA and the railroads. Regional PTC specialists are responsible for reviewing and approving requests submitted by railroads preparing to test system functionality as well as individual testing procedures describing the specific equipment and movements involved in each test. In addition, FRA officials told us that assessing civil penalties and sending commuter railroads letters of concern are the primary enforcement mechanisms they have available to oversee PTC.

Information Sharing: FRA officials said that they have primarily used informal assistance and participation in group meetings to convey information related to the implementation process and the specific milestones necessary to meet the 2018 deadline or qualify for an extension. FRA officials acknowledged that they do not have the capacity to provide frequent one-on-one assistance to all railroads given their growing PTC workload and limited agency resources. As such, FRA officials explained that in order to reach a wide audience given the approaching deadline, their current focus is on presentations at industry group meetings (e.g., APTA's Commuter Rail Summit) and specific PTC system user-group meetings. FRA's regional PTC specialists told us they also provide direction on technical aspects of PTC implementation and testing, primarily by discussing issues at individual and railroad-industry meetings and providing informal feedback on commuter railroads' PTC documentation, such as testing requests.

FRA Has Not Systematically Communicated Information to Help Railroads Prepare for the 2018 Deadline or to Qualify for Extensions

While the majority of the railroad representatives we met with said FRA officials were consistently available to discuss issues that arise during day-to-day PTC implementation activities, the information conveyed by these officials has sometimes been inconsistent. In particular, FRA's heavy reliance on informal assistance and participation in group meetings to convey information to commuter railroads has led, at least on some occasions, to different or inconsistent information being communicated in different meetings. For example, representatives from one PTC equipment supplier said that FRA has not consistently commented on different railroads' test plans, and as a result, they have not been able to carry lessons learned over to other railroads' plans.
In addition, while FRA’s officials said their position has been consistent with the regulations stating that the host railroad must submit a safety plan to FRA, representatives from one railroad we met with said they had heard conflicting information from FRA. For example, these railroad representatives told us that FRA officials originally said commuter railroads that are only tenants on other railroads needed to submit their own safety plans but later stated at an industry association meeting that tenant railroads could be included in the host railroads’ plans. In addition, commuter railroads have expressed a need for additional clarification about the criteria for applying for an extension. FRA officials also told us that they have received a lot of questions from commuter railroads about the criteria for an extension related to RSD or other alternative criteria. As noted above, to date, FRA has approved alternative extension criteria for three railroads, and in each case, the criteria involved RSD testing on a shorter track segment. However, representatives from one contractor working with several commuter railroads said it is unclear what “alternative criteria” FRA will approve to receive an extension. In addition, representatives from one commuter railroad stated that any opportunity to clearly outline FRA’s interpretation of the PTC requirements, specifically the alternative extension criteria that could, for example, allow for a shorter test segment, would enable railroads to better position themselves to apply for an extension. Representatives from some commuter railroads we met with were likewise unclear about the agency’s approach to reviewing and granting extension requests. Representatives from three commuter railroads said clarification of FRA’s planned approach would be helpful as the deadline approaches. According to FRA officials, the statute does not set a deadline by which railroads have to apply for an extension, and FRA has not set a deadline or indicated the latest date by which a railroad should apply. Nonetheless, for railroads that do not comply with PTC deadlines, FRA officials said they could impose civil penalties for each day a railroad fails to implement a PTC system by the applicable statutory deadline, but the agency has yet to determine how it will handle railroads that do not meet the deadline or receive an extension. With less than a year remaining before the 2018 deadline, FRA officials stated that they anticipate their workload is likely to increase as railroads submit additional documentation to review and continue to progress with testing. More systematic communication that delineates FRA’s planned approach for the upcoming deadline and extension process may be critical for the agency to efficiently use its limited resources and convey consistent information to all the railroads. Standards for internal control in the federal government state that management should externally communicate the quality information necessary to achieve the entity’s objectives. These standards also note that management should select the appropriate form and method of communication, so that information is communicated widely and on a timely basis. As we have previously found, the particular form of the agency’s communication—for example, by oral presentation, written guidance, or formal regulation—will depend on multiple factors including the purpose and content of the specific communication and applicable legal requirements. 
Moreover, internal control standards indicate agencies should have standard processes in place to determine which form of communication is appropriate in each case. FRA officials told us that the agency could issue written guidance explaining how it has decided to apply its deadline extension authority and what type of information railroads would then need to submit to get an extension. However, FRA officials stated this written guidance would require time-consuming approval by the Office of Management and Budget under the Paperwork Reduction Act, which would make timely issuance of such guidance difficult. As noted, however, FRA may have the option to use less formal, less time-consuming methods of communicating key information about the extension process, such as webinars or conference calls, to communicate information more systematically. FRA officials acknowledged they are working to identify mechanisms such as these, but they have yet to do so. Absent systematic communication articulating the agency's planned approach for the extension process, railroads may not have the information they need to effectively prepare for the deadline or seek an extension.

FRA Has Made Limited Use of Implementation Progress to Prioritize Efforts and Mitigate Risks

While FRA has taken steps to more closely monitor railroads' implementation progress, the agency has not prioritized its efforts, including its allocation of resources, based on an assessment of risk. In its 2015 Railroad Accountability Plan, FRA stated that its PTC data collection and monitoring efforts would allow the agency to inform, among other things, its resource allocation and risk mitigation. While FRA has used its data to identify at-risk railroads, it has not used this information to prioritize how to allocate its resources or address risks. For example, as discussed earlier, after reviewing railroads' data on their progress in installing PTC equipment, FRA notified 14 commuter railroads of their at-risk status in May 2017. However, while FRA officials said that they hold regular meetings with many—but not all—of the at-risk railroads, 9 of these 14 commuter railroads said that the formal letter they received did not ultimately trigger any change in the type of interaction they have with FRA. More recently, in December 2017, the Secretary of Transportation notified all railroads required to implement PTC by letter of the expectation that all possible measures be taken to ensure implementation requirements are met by the 2018 deadline. However, these letters made no distinction between railroads—that is, the same letter was sent to railroads with conditionally certified PTC systems and to railroads that reported completing no training or installing no locomotive equipment to date—nor did the letters describe how FRA's approach to working with the railroads would respond to their particular circumstances and risks. As noted above, FRA officials have stated that the agency does not have the resources to meet more frequently with or provide additional assistance to railroads. While the PTC Task Force helps monitor railroads' progress, FRA still employs fewer than 12 individuals with the requisite PTC expertise and experience to review technical documents and help railroads implement PTC systems. In an environment with limited agency resources, targeting agency efforts to areas of the greatest risk or highest priority is one way to leverage existing resources.
According to standards for internal control in the federal government, management should identify, analyze, and respond to risks. In addition, FRA's Strategic Human Capital Plan states that developments such as the rapid introduction of new technologies, including PTC, demand that FRA continuously evaluate its programs and resources to adapt to changing demands. However, FRA has not fully leveraged the implementation progress data that railroads submit to the agency to identify and develop a risk-based approach to prioritizing agency actions. At present, it is unclear whether the agency's priorities are, for example, to help the largest commuter railroads meet the deadline or extension requirements, push those railroads that are very close to full implementation, or assist railroads that are in the earliest stages of their PTC project. For example, one regional PTC specialist we met with said that if he did not need to be reviewing documentation or observing railroads' field testing, he could spend more time with at-risk railroads. By not effectively targeting actions to help mitigate risks posed by railroads most at risk of not meeting the PTC deadline or qualifying for an extension, FRA misses the opportunity to leverage its limited resources by providing direct assistance in the areas of greatest need.

Conclusions

Commuter railroads have made much progress in implementing PTC. Nevertheless, about half of commuter railroads plan to apply for an extension, and many of the railroads' planned schedules raise questions about their ability to complete key implementation milestones and qualify for RSD-based extensions prior to the 2018 deadline. As the 2018 deadline rapidly approaches, the need for clear information that is systematically communicated to all railroads implementing PTC becomes even more critical. FRA cannot expect to provide information and guidance to railroads individually, and therefore, adopting a risk-based communication strategy could help it more efficiently share information in the coming year. Moreover, the information FRA collects on railroads' progress has not been used to inform the agency's resource allocation decisions. Using this information to better allocate resources could help position FRA to better meet its responsibility to monitor and oversee PTC implementation in the future.

Recommendations for Executive Action

We are making the following two recommendations to FRA:

The Administrator of FRA should identify and adopt a method for systematically communicating information to railroads regarding the deadline extension criteria and process. (Recommendation 1)

The Administrator of FRA should develop an approach to use the information gathered to prioritize the allocation of resources to address the greatest risk. (Recommendation 2)

Agency Comments

We provided a draft of this statement to DOT for review and comment. In its comments, reproduced in appendix II, the agency concurred with our recommendations. DOT also provided technical comments, which we incorporated as appropriate. Chairman Thune, Ranking Member Nelson, and Members of the Committee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time.

GAO Contact and Staff Acknowledgments

If you or your staff have any questions about this testimony, please contact Susan Fleming, Director, Physical Infrastructure team, at (202) 512-2834 or flemings@gao.gov.
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Susan Zimmerman (Assistant Director), Sarah Arnett, Jim Geibel, Delwen Jones, Joanie Lofgren, SaraAnn Moessbauer, Malika Rice, Amy Suntoke, Maria Wallace, Eric Warren, and Crystal Wesco.

Appendix I: Objectives, Scope, and Methodology

This statement examines commuter railroads' implementation of positive train control (PTC). Specifically, this statement addresses: (1) commuter railroads' progress in implementing PTC; (2) how many, if any, commuter railroads may be at risk of not meeting the mandated PTC deadline or certain extension criteria, and what factors may be affecting implementation progress; and (3) the extent to which FRA's management and oversight approach has helped ensure that commuter railroads either meet the deadline or qualify for an extension. To address these objectives, we reviewed the Rail Safety Improvement Act of 2008, the Positive Train Control Enforcement and Implementation Act of 2015, and applicable Federal Railroad Administration (FRA) regulations, reports, and guidance. Our review focused on the 29 railroads FRA officials identified as commuter railroads required to implement PTC. We also reviewed previous GAO work on PTC and applied Standards for Internal Control in the Federal Government to FRA's role overseeing PTC implementation, including the principles that management should externally communicate the necessary quality information to achieve the entity's objectives and that management should identify, analyze, and respond to risks. In addition, we interviewed representatives from 19 commuter railroads to further understand their implementation progress, factors that may be affecting progress, and the interviewees' perspectives on FRA's management and oversight of PTC implementation. We selected the 19 railroads to include the 14 railroads that, according to FRA, were identified in May 2017 as at risk of both not meeting the 2018 implementation deadline and not completing statutory requirements necessary to receive a deadline extension, as well as 5 other railroads that were further ahead with implementation and that varied in geographic location and size of rail system, among other factors. We met with relevant FRA officials involved in PTC monitoring, enforcement, and technical assistance, including the PTC Staff Director, regional PTC specialists working in each of the FRA regions where commuter railroads selected for interviews operate, and members of the headquarters-based PTC Task Force. In addition, we met with FRA Office of Railroad Safety specialists and engineers, among others. We also interviewed representatives from all 7 of the Class I freight railroads (which are also required to implement PTC), 5 major PTC equipment suppliers and contractors identified by FRA, and representatives from 2 railroad industry associations—the Association of American Railroads and the American Public Transportation Association—to obtain their perspectives on commuter railroads' implementation of PTC, factors affecting implementation progress, and FRA's PTC management and oversight. To identify commuter railroads' progress in implementing PTC, we reviewed railroads' third quarter progress reports submitted to FRA for the period ending September 30, 2017.
We reviewed the most recently available quarterly data outlining the 29 commuter railroads' installation and implementation progress in selected areas as of September 30, 2017, including: locomotive equipment installed, wayside equipment installed, employee training, locomotives fully equipped and PTC-operable, spectrum obtained, the status of field testing, and revenue service initiated. As necessary, we also reviewed the narrative fields in the quarterly reports for additional context related to a given railroad's implementation activities and the extent of progress made in specific implementation areas. We assessed the data in these reports by reviewing them for anomalies, outliers, or missing information, and reviewing supporting narratives to ensure they aligned with the reported data, among other things. Based on these steps, we determined that these data were sufficiently reliable for our purpose of describing railroads' progress implementing PTC. We also reviewed other sources of information, such as PTC Implementation Plans, railroads' 2016 annual progress reports, and interviews with railroad representatives. To assess progress on locomotive equipment installation and wayside equipment installation, we compared the quantities installed to the total quantities required for PTC implementation. Similarly, to assess progress on employee training, we compared the number of employees trained to the number of employees required to be trained for PTC implementation. To assess progress in fully equipping locomotives to be PTC-operable, we compared the quantity of locomotives that are fully equipped and PTC-operable to the quantity required for PTC implementation. To assess progress on obtaining spectrum, we reviewed the quarterly update on spectrum. We concluded that a railroad had obtained spectrum if, for one or more areas or locations, it reported that spectrum was either (1) acquired but not available for use or (2) acquired and available for use. We also reviewed the narrative, as appropriate. For some railroads, we concluded that spectrum was not applicable because they use a PTC system that does not require spectrum, or because their host railroad is responsible for obtaining spectrum. To assess progress on field testing, we reviewed the third quarter status on installation and track-segment progress. We concluded that a railroad had initiated field testing if one or more of its segments were reported as (1) testing or (2) operational/complete. To determine which railroads initiated revenue service demonstration (RSD), we reviewed the cumulative territories where RSD had been initiated. If the railroad reported that one or more territories had initiated RSD, we concluded that RSD had been initiated. Finally, to determine which railroads anticipate completing implementation before the December 31, 2018 deadline and which plan to seek any RSD-based extension, we obtained information from all 29 commuter railroads to identify which railroads plan to implement PTC by the 2018 deadline and which plan to submit an alternative schedule (that is, a request for an extension) to implement PTC after the December 31, 2018 deadline. To identify commuter railroads at risk of meeting neither the PTC deadline nor any RSD-based extension criteria, we first reviewed data on railroads' progress installing PTC locomotive and wayside equipment. We did this because FRA used such installation progress to identify 14 commuter railroads as being at risk and notified them via formal letter in May 2017.
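The progress comparisons described above reduce to simple percent-complete arithmetic. A minimal sketch follows; the railroad names and quantities are hypothetical illustrations, not figures from the quarterly reports:

# Illustrative sketch only: hypothetical figures, not data from railroads' quarterly reports.
# Progress = quantity installed divided by the total quantity required for PTC implementation.
reported = {
    # railroad: (locomotives equipped, locomotives required,
    #            wayside units installed, wayside units required)
    "Railroad A": (40, 80, 150, 300),
    "Railroad B": (75, 75, 290, 300),
}

for railroad, (loco_done, loco_req, way_done, way_req) in reported.items():
    total_done = loco_done + way_done
    total_req = loco_req + way_req
    print(f"{railroad}: locomotives {100 * loco_done / loco_req:.0f}%, "
          f"wayside {100 * way_done / way_req:.0f}%, "
          f"total hardware {100 * total_done / total_req:.0f}% installed")

Total hardware installed, computed this way, corresponds to the installation benchmark FRA used to flag at-risk railroads, discussed next.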
To confirm FRA's identification of commuter railroads that would be at risk based on an updated benchmark for the third quarter of 2017—railroads with less than 65 percent of total hardware installed—we analyzed railroads' reported locomotive and wayside equipment installation status as of September 30, 2017, to determine the percentage of total hardware installed for each commuter railroad. To build on this analysis, we collected information from all 29 commuter railroads on their actual and planned schedules for key implementation milestones. For the 19 commuter railroads we met with, we collected this information as part of our interviews, and for the remaining 10 commuter railroads, we collected this information by email using a standard data collection instrument. The key implementation milestones covered procuring a prime contractor for PTC implementation; applying for and entering field testing and RSD, which is the final phase of field testing; installing the back office server (BOS) and associated software; and completing PTC implementation. This schedule information was collected between September 2017 and January 2018. We compared the amount of time commuter railroads planned for completing two key milestones to the amount of time that FRA officials estimate is required for each milestone and to the experiences of railroads that already initiated RSD. The two milestones are as follows: Install the BOS and associated software necessary to connect and interface with wayside, locomotive, and dispatch equipment. Conduct field testing of installed infrastructure, which is an initial assessment of the PTC system's overall functionality on trains that are not transporting passengers or operating during regular passenger service. We selected these two milestones because (1) each milestone follows equipment installation (which FRA had previously analyzed to assess commuter railroads' PTC implementation progress); (2) a railroad must complete both to enter RSD; and (3) several interviewees, including PTC contractors and suppliers and FRA officials, said these activities are important project milestones that are complex and time consuming. We calculated the amount of time a commuter railroad planned for each milestone (with initiating RSD as the endpoint for each milestone), and compared that amount of time to two benchmarks: first, the anticipated length of time FRA officials said that the milestones have taken or may take, and second, the average amount of time (in months) that each milestone took the six commuter railroads that had started RSD as of September 2017. Since we used two benchmarks, we present a range of railroads that may not have sufficient time to complete these milestones and thus may be at risk of not meeting the 2018 deadline or qualifying for an RSD-based extension. Appendix II: Agency Comments This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study Forty-one railroads, including 29 commuter railroads, are required by statute to implement PTC. Commuter railroads unable to implement a PTC system by December 31, 2018, may receive a maximum 2-year extension if they meet certain statutory criteria. GAO was asked to review commuter railroads' PTC implementation. Among other objectives, this statement discusses (1) commuter railroads that may not be positioned to meet the PTC deadline or to qualify for an extension, and factors affecting their progress, and (2) the extent to which FRA's management and oversight approach has helped ensure that commuter railroads meet the deadline or qualify for an extension. GAO analyzed commuter railroads' most recently available quarterly progress reports, collected information on planned implementation schedules, interviewed 19 commuter railroads—including 14 that FRA identified as at risk and 5 others that were further ahead with implementation—and interviewed FRA officials. What GAO Found The Federal Railroad Administration (FRA) is responsible for overseeing railroads' (including commuter railroads') implementation of positive train control (PTC) by December 31, 2018. PTC is a communications-based train control system designed to prevent certain types of accidents and involves the installation, integration, and testing of hardware and software components. For example, railroads must install equipment on locomotives and along the track, and complete field testing, including revenue service demonstration (RSD)—an advanced form of testing that occurs while trains operate in regular service. GAO's analysis of commuter railroads' scheduled milestones for two key PTC activities necessary to meet the 2018 deadline or qualify for an RSD-based extension (one of the statutory options) found that as many as two-thirds of the 29 commuter railroads may not have allocated sufficient time to complete these milestones. Specifically, in comparing the commuter railroads' schedules to FRA's estimates of the time required to complete these milestones and the experiences of railroads that have already completed them, GAO's analysis found that from 7 to 19 commuter railroads may not complete the milestones before the 2018 implementation deadline or qualify for an RSD-based extension. For example, FRA estimates that field testing (one of the milestones) takes at least one year, but GAO found that 14 commuter railroads plan to start this testing less than a year before the 2018 deadline, increasing the potential risk that this milestone will not be completed. However, FRA has the authority to establish alternative criteria for an extension not based on RSD, and several other factors can affect commuter railroads' planned and future progress. As a result, the number of commuter railroads at risk of not meeting the deadline or qualifying for an extension could increase or decrease in the coming year. FRA's PTC management and oversight includes monitoring commuter railroads' progress, reviewing documentation, and sharing information with them, but the agency has not systematically communicated information or used a risk-based approach to help these railroads prepare for the 2018 deadline or qualify for an extension. GAO found that FRA has primarily used informal assistance, meetings with individual railroads, and participation in industry-convened groups to share information with commuter railroads, and in some cases the information conveyed has been inconsistent, according to industry representatives.
Some commuter railroads also told GAO that clarification about the agency's planned process for reviewing and approving extension requests would be helpful. Federal internal control standards state that management should externally communicate the necessary quality information to achieve its objectives. While FRA officials have said they are working to identify additional ways to convey extension-related information, they have not yet done so. Moreover, although FRA receives information from commuter railroads on their progress in implementing PTC, it has not used this information to prioritize resources using a risk-based approach. With the year-end 2018 deadline approaching, and an anticipated significant increase in FRA's workload, targeting resources to the greatest risk can help better ensure that FRA effectively fulfills its oversight responsibilities and provides commuter railroads the information they need to prepare for the 2018 deadline or seek an extension. What GAO Recommends GAO recommends FRA identify and adopt a method for systematically communicating information to railroads and use a risk-based approach to prioritize its resources and workload. DOT concurred with the recommendations. The agency also provided technical comments, which were incorporated as appropriate.
Background Individuals who have a limited ability to care for themselves due to physical, cognitive, or mental disabilities or conditions may require a range of LTSS that include hands-on assistance with, or supervision of, daily tasks. Individuals with LTSS needs range from young children to older adults, and they have varying degrees of difficulty performing without assistance (1) activities of daily living (ADL), such as bathing, dressing, toileting, and eating, or (2) instrumental activities of daily living (IADL), such as preparing meals, housekeeping, using the telephone, and managing money; they may require full or partial assistance to complete some—or all—of the ADLs and IADLs. LTSS are generally provided in two settings: (1) institutional settings, such as nursing facilities and intermediate care facilities for individuals with intellectual disabilities; and (2) home and community settings, such as homes or assisted living facilities. LTSS provided in home- and community-based settings comprise a wide range of services and supports to help individuals remain in or return to their homes or communities. HCBS include personal care services to provide assistance with ADLs or IADLs, adult day care services, certain home modifications that allow beneficiaries to remain in their home, non-medical transportation, respite care for caregivers, and case management services to coordinate services and supports. Direct care workers—personal care aides, homemakers, companions, and others—provide the majority of the paid care for individuals with LTSS needs. Medicaid Coverage of HCBS Medicaid provides states with a number of options for providing HCBS, including through state plan benefits and through waivers and demonstrations. Since 1975, states have had the option to offer personal care services under their state Medicaid plan, which covers assistance with ADLs and IADLs, either at home or in another location. States also have the option to cover HCBS for Medicaid beneficiaries through waivers and demonstrations, under which states may, for example, provide services not otherwise covered by Medicaid to designated populations who may or may not otherwise be eligible for Medicaid services. States have the option to seek approval for waivers and demonstrations that allow them to target HCBS to specific populations or conditions, limit the availability of those services geographically, and limit the number of individuals served through the use of enrollment caps—actions that are generally not otherwise allowed under Medicaid, but may enable states to control costs. Table 1 below summarizes key characteristics of selected state plan and waiver authorities that states can use to provide HCBS. The 1915(c) waiver, named for the statutory provision authorizing it in the Social Security Act, is the primary means through which states provide HCBS coverage for Medicaid beneficiaries. Added as an option in 1981, these waivers account for the majority of Medicaid HCBS expenditures. Under 1915(c) waivers, states may cover a broad range of services for participants, as long as these services are required to prevent institutionalization. Therefore, to be eligible, individuals must demonstrate the need for an institutional level of care by meeting state eligibility requirements for services in an institutional setting, such as a nursing facility.
Prior to 2014, states were required to have multiple 1915(c) waivers if they chose to target different populations—using, for example, one waiver for individuals with developmental disabilities and another for individuals with physical disabilities. However, beginning in March 2014, CMS permitted states to combine target groups within a single 1915(c) waiver as long as the services offered were the same for all groups. States' 1915(c) waivers are required by federal law to be cost neutral; that is, states must show that the average Medicaid expenditures for the services provided under the waiver are equal to or less than what average expenditures would be if that same population were to be served in an institutional setting. States may apply cost neutrality in the aggregate across all waiver participants—meaning that some individuals can be more costly to serve in home- and community-based settings than in an institution—or individually, meaning that spending for each waiver participant can be no more than what it would cost to serve the individual in an institution. States also have the option to limit the number of beneficiaries served under a 1915(c) waiver by establishing a predefined enrollment cap. States with enrollment caps may establish a waiting list, and a nationwide survey of state Medicaid officials estimated that there were over 600,000 individuals on waiting lists for 1915(c) waiver services in 2015. The newest Medicaid option for covering HCBS—the Community First Choice state plan option under section 1915(k) of the Social Security Act—was established by the Patient Protection and Affordable Care Act in 2010. Under this option, states must provide personal care services to assist beneficiaries with ADLs and IADLs and services to support the acquisition of skills necessary for beneficiaries to accomplish these daily activities, among other things. The Community First Choice option also allows for the coverage of other services, such as the costs associated with moving a beneficiary from an institution to a home- or community-based setting. Like the 1915(c) waiver, this option is limited to individuals who meet the state's institutional level-of-care criteria, but unlike the 1915(c) waiver, enrollment in a 1915(k) Community First Choice program cannot be capped. States that offer this benefit receive a 6 percentage point increase in their federal medical assistance percentage for services provided under this option. Medicaid Spending on LTSS Medicaid spending on LTSS is significant, representing about 30 percent of total Medicaid program spending in fiscal year 2016, and the percentage of LTSS spending used for HCBS has grown over time. CMS's annual reports on LTSS expenditures have shown that national spending for HCBS as a percentage of LTSS spending surpassed the percentage spent on institutional care in fiscal year 2013 and has continued to grow, climbing to 53 percent in fiscal year 2014, 54 percent in 2015, and 57 percent in 2016. At the state level, 29 states spent more on HCBS than institutional care in fiscal year 2016, but the percentage of HCBS spending varied widely across states. (See fig. 1.) As states' options for providing HCBS within Medicaid and spending on HCBS have grown, Congress has also authorized temporary programs aimed at increasing the provision of HCBS.
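Returning briefly to the 1915(c) cost-neutrality requirement described above, the difference between the aggregate and individual tests can be illustrated with a minimal sketch; all costs below are hypothetical, not actual Medicaid data:

# Illustrative sketch only: hypothetical annual costs, not actual Medicaid data.
institutional_cost = 60000  # assumed average annual cost of institutional care

waiver_costs = [35000, 42000, 71000, 28000]  # hypothetical annual HCBS cost per participant

# Aggregate test: average waiver cost must not exceed average institutional cost.
aggregate_ok = sum(waiver_costs) / len(waiver_costs) <= institutional_cost

# Individual test: no single participant's waiver cost may exceed the institutional cost.
individual_ok = all(cost <= institutional_cost for cost in waiver_costs)

print(f"Aggregate cost neutrality met: {aggregate_ok}")    # True: the average is 44,000
print(f"Individual cost neutrality met: {individual_ok}")  # False: one participant costs 71,000

Under the aggregate test, the high-cost participant is permissible because lower-cost participants offset the difference; under the individual test, that participant would exceed the limit. The temporary programs noted above include the two described next.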
Money Follows the Person was established by the Deficit Reduction Act of 2005 as a demonstration grant program to support states’ transition of eligible individuals who want to move from institutional settings back to the community. As of September 2016, CMS had awarded a total of about $3.7 billion in grant funding to 44 states. According to CMS, as of December 2016, funding from the program had been used to support the transition of more than 75,000 individuals back into the community. Authorization for the Money Follows the Person program expired at the end of fiscal year 2016, but states have through fiscal year 2018 to transition new beneficiaries and through fiscal year 2020 to spend any remaining grant funds. The Balancing Incentive Program was created by the Patient Protection and Affordable Care Act to help states rebalance their provision of LTSS toward greater use of HCBS. Under the program, states that spent under 25 percent of their LTSS expenditures on HCBS in fiscal year 2009 qualified for a 5 percentage point increase in their federal medical assistance percentage for state HCBS expenditures. States that spent between 25 and 50 percent were eligible for a 2 percentage point increase. In return, states agreed to increase the percentage of LTSS spending for HCBS to achieve a specific benchmark. Under the program, CMS provided $2.4 billion in enhanced federal matching payments over 4 years (October 2011 – September 2015) to 21 states. According to CMS, 15 of the 21 states met their balancing benchmark by September 2015, when the program ended. HCBS Delivery Systems States can choose among delivery systems, such as fee-for-service and MLTSS (i.e., managed care), to provide HCBS. Under fee-for-service, states pay providers directly and on a retrospective basis for each covered service they deliver. In contrast, in MLTSS, states contract with MCOs to provide a specific set of covered services to beneficiaries in return for one fixed periodic payment per beneficiary, typically per member per month. These payments are referred to as capitation payments. The use of MLTSS has increased over time; MLTSS spending rose from $10 billion in fiscal year 2012 to about $39 billion in 2016. According to a 2018 CMS report, 24 states had implemented 41 MLTSS programs as of August 2017, and there were about 1.8 million Medicaid beneficiaries enrolled in MLTSS programs. Selected States’ HCBS Program Structures Reflect Decisions about Populations to Cover, Whether to Limit Eligibility or Enrollment, and Managed Care Preferences The structure of the 26 HCBS programs we reviewed in selected states reflected decisions about which populations states wanted to cover, whether to limit eligibility for or enrollment in HCBS programs, and whether the state wanted to provide HCBS through managed care (i.e., MLTSS). In two states, settlements resulting from litigation also affected the structure of HCBS programs. Decisions about Which Populations to Cover Four of our five selected states—Florida, Mississippi, Montana, and Oregon—had multiple HCBS programs (21 in total) that targeted specific populations. The fifth state, Arizona, used one program to provide HCBS to individuals who are aged or disabled and those with intellectual or developmental disabilities. The remaining four programs were not targeted to specific populations. (See appendix I for a list of the HCBS programs and populations served in each of the selected states.) 
All four of Florida's HCBS waiver programs targeted specific populations, such as individuals with intellectual or developmental disabilities and individuals with familial dysautonomia. Florida's HCBS program for intellectually and developmentally disabled individuals included an individual budgeting model through which the beneficiaries and their guardians could choose which services they received and which providers would deliver the services. Such individual budgeting also allowed beneficiaries the flexibility to make adjustments in services and providers as their needs changed. All of Mississippi's six HCBS programs provided services to targeted populations, including the aged or disabled and individuals with severe orthopedic and neurological impairment. Two of the programs were targeted to individuals with intellectual or developmental disabilities, including a state plan benefit that provided services to help beneficiaries develop daily living and social skills, offered opportunities to participate in community activities, and promoted beneficiaries' ability to obtain and maintain employment. Four of Montana's six HCBS programs targeted specific populations, including those with severe disabling mental illness and children with autism. Officials from Montana told us that one of the reasons for implementing the program for children with autism was to provide early intensive treatment to lessen the degree of services needed later in life. In addition to its programs for specific populations, Montana also operated two programs that provided personal care services to a broader Medicaid population requiring assistance with ADLs and IADLs—the personal care state plan benefit and the Community First Choice program. Montana officials told us that one of the factors the state considered when implementing the Community First Choice program was the 6 percentage point enhanced federal match for this program; before implementing the program, Montana projected that the increase in federal funds would allow the state to serve an additional 150 beneficiaries per year. Oregon had nine different HCBS programs, seven of which targeted specific populations, including children with LTSS needs and different populations of individuals with intellectual or developmental disabilities. Like Montana, Oregon also had two personal care services programs that served all eligible Medicaid beneficiaries—a state plan benefit and a Community First Choice program. Oregon officials explained that they were also attracted to the Community First Choice option due to the enhanced federal match, as well as the opportunity to expand the array of services available. For example, in addition to providing personal care services, Oregon's Community First Choice program also covers costs associated with transitioning beneficiaries from institutions to home- or community-based settings, such as the first month's rent, utility deposits, bedding, and basic kitchen supplies. Decisions about Whether to Limit Eligibility or Enrollment All five of the selected states had at least 1 HCBS program that limited eligibility to individuals who require an institutional level of care. Specifically, 22 of the 26 HCBS programs we reviewed limited eligibility to this population. The remaining 4 programs—in Mississippi, Montana, and Oregon—were state plan HCBS or personal care services programs, which were operated under authorities that do not permit limiting eligibility to individuals with an institutional level-of-care need.
Four of the selected states—Florida, Mississippi, Montana, and Oregon—had enrollment caps for 1 or more of their HCBS programs, namely all of the 19 HCBS programs operated under 1915(c) waivers. Some of the state officials we spoke with told us that they used historical data on utilization, cost-of-care per person, and the annual number of requests for enrollment, as well as information on available funding, when determining their enrollment caps. However, states can also obtain CMS approval to change their enrollment caps over time to respond to increased demand or to include additional populations. Oregon officials told us that the state has generally been able to increase the enrollment cap for the aged or disabled program as needed in order to meet demand. Montana officials told us that the enrollment cap for their HCBS program for individuals with intellectual or developmental disabilities—originally limited to children—was increased when the state decided to expand the program to serve adults. The four selected states maintained waiting lists for 12 of the 19 HCBS programs that limited enrollment through enrollment caps. However, because states differed on whether they determined eligibility before adding individuals to the waiting list, information on the number of individuals on these waiting lists is not comparable across states. For example, Florida did not screen for eligibility prior to placing individuals on the waiting list of its aged or disabled waiver, which totaled over 48,000 individuals as of December 2017. By contrast, individuals on Montana's much smaller aged or disabled waiting list were pre-screened for eligibility. In addition, states varied on whether and how they set priorities for enrollment in the waiver for individuals on the waiting list. For example, the Montana aged or disabled waiver set priorities for an individual's enrollment according to various state criteria, including risk of institutionalization and an assessment of informal supports. By contrast, in Mississippi, individuals on the intellectual or developmental disabilities waiting list generally gained enrollment into the waiver in order of their date of eligibility. Decisions about Whether to Use MLTSS Two of the selected states we reviewed—Arizona and Florida—used MLTSS for one HCBS program. Officials from these states told us the ability to use managed care contracts to (1) set incentives aimed at transitioning individuals from institutions to home- and community-based settings and (2) increase oversight of providers were important factors in choosing MLTSS to provide HCBS. Setting incentives for transitions. State officials told us that they used contract incentives to shift services from nursing facilities to community-based care in their MLTSS programs. Specifically, Arizona and Florida used blended capitation rates, meaning that the rate or amount the states pay MCOs to cover expected costs for each LTSS beneficiary is the same for all beneficiaries regardless of whether they are in a nursing home or in a home- and community-based setting. Because HCBS is generally less expensive than LTSS delivered in institutional settings, blended rates can create a financial incentive for MCOs to serve as many beneficiaries as possible in home- and community-based settings. Three of the MCOs we spoke with provided examples of how they have responded to these incentives to provide HCBS.
For example, an official from one MCO told us that the MCO had created new positions for "transition clinicians," registered nurses who use their medical knowledge to systematically evaluate beneficiaries in an institution to determine if they may be a candidate for transition to a community-based setting. The official explained that after the transition clinician identifies a potential candidate, the clinician will evaluate other factors, including the candidate's current housing options and level of familial support, in order to ensure that necessary resources are in place when the beneficiary leaves the institution. In addition, the official said they facilitated transitions by providing beneficiaries leaving nursing facilities with a one-time $2,500 transition allowance that can be used for expenses such as security or utility deposits, furniture, or new resident fees at an assisted living facility. Oversight of MCOs. According to officials from Arizona and Florida, the states chose to use MLTSS because it afforded better oversight of providers and had the potential to improve patient outcomes. Specifically, officials said that managing a limited number of MCOs, who in turn have contracts with HCBS providers, allows for better oversight and outcomes, and has led to service delivery improvements, compared to paying providers on a fee-for-service basis. For example, Florida officials explained that they recently consolidated three smaller fee-for-service programs into their MLTSS program. Prior to that consolidation, the three fee-for-service programs provided HCBS to approximately 7,500 individuals with AIDS, traumatic brain injury or spinal cord injury, or cystic fibrosis. Officials said that they did not believe providers in these smaller fee-for-service programs were providing good care, based on service utilization analyses that showed some beneficiaries were not accessing any services beyond one case management service per month. Furthermore, the officials told us that it was harder to assess quality of care in the fee-for-service programs compared to MLTSS. Officials said that now that these beneficiaries receive care under the MLTSS waiver, there is more accountability and improved quality of care. Representatives from aging and developmental disability professional groups we interviewed said that states may also choose to implement MLTSS programs to achieve greater budget predictability and control costs. CMS's recent report on the growth of MLTSS also notes states' desire for improvements in quality of care and outcomes; increased access to HCBS providers; and better care coordination, among other factors. We have previously reported that although MLTSS can provide states with the opportunity to enhance and encourage the provision of HCBS, oversight at the state and federal levels is critical to ensure that individuals with LTSS needs are able to obtain needed care in a timely fashion. In addition, our prior work on MLTSS payment rates found that five states—including Arizona and Florida—set clear financial incentives in their MCO payment rates for greater use of community-based care, while one state's rate structure included higher payments for beneficiaries receiving institutional care. This state's rate structure could have created an incentive for MCOs to move higher-cost beneficiaries from the community to an institution.
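The financial incentive created by blended capitation rates follows from simple arithmetic. The per-member-per-month (PMPM) amounts below are hypothetical, chosen only to illustrate the mechanism:

# Illustrative sketch only: hypothetical per-member-per-month (PMPM) amounts.
blended_rate = 4000           # same payment to the MCO regardless of setting
nursing_facility_cost = 5500  # assumed PMPM cost of institutional care
hcbs_cost = 3000              # assumed PMPM cost of home- and community-based care

# Under a blended rate, community placement leaves a margin while institutional
# placement produces a loss, so the MCO gains by serving beneficiaries in
# home- and community-based settings where appropriate.
print(f"Margin in a nursing facility: {blended_rate - nursing_facility_cost}")  # -1500
print(f"Margin with HCBS: {blended_rate - hcbs_cost}")                          # 1000

By contrast, a rate structure that pays more for institutional placements, like the one state's structure noted above, reverses these margins and with them the incentive.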
Additionally, we found that most of the states reviewed for that prior work were not specifically linking payments with MLTSS program goals such as beneficiary outcomes and that federal oversight of states’ MLTSS payment structures was limited. We made several recommendations to improve CMS’s oversight of states’ payment structures for MLTSS. CMS agreed with our recommendations and reported actions it planned to take to address them. Officials from the three selected states that do not use MLTSS cited various reasons for this, such as stakeholder opposition and state law restrictions on enrolling individuals receiving LTSS in managed care. For example, officials in Oregon explained that stakeholders objected to the profit motive they assumed an MCO would have, which the stakeholders believed would compromise quality of care and reduce beneficiaries’ choice of providers. Officials in Montana said that because the state was rural and had relatively few Medicaid beneficiaries, MLTSS would not be cost effective. The Effects of Litigation on the Structure of HCBS Programs Officials from two of the selected states—Oregon and Mississippi—told us that settlements resulting from litigation have shaped the structure of their HCBS programs for certain populations. Oregon officials explained that a legal settlement in 2001 resulted in the creation of an additional HCBS program for individuals with intellectual or developmental disabilities and the elimination of an HCBS waiting list for this population. In Mississippi, officials explained that as a result of a legal settlement in 2005, the state increased enrollment in certain HCBS programs. As a result of the settlement, officials said that state case managers contacted all 1,900 individuals who resided in institutions at the time to determine their interest in living in a home- and community-based setting. Those who expressed interest were evaluated to determine if they could live outside an institution and whether adequate familial or other support was available. Based on this information, and as a result of additional funding from the state legislature as a result of the lawsuit, the state was able to add new beneficiaries to several of its HCBS programs. Selected States Described Challenges Providing HCBS, Such As Workforce Issues, and Steps Taken to Respond to These Challenges Officials from the five selected states and MCOs we interviewed described challenges with providing HCBS, including workforce issues, such as recruiting and retaining direct care workers; serving beneficiaries with complex medical and behavioral health needs; and other challenges. The officials also reported taking steps to respond to these challenges. HCBS Workforce Challenges Officials from all five selected states and three of the four MCOs we interviewed described workforce challenges, such as recruiting and retaining direct care workers and ensuring the availability of HCBS providers in rural and remote areas. For example, officials from Montana and Oregon noted that the low wages paid to direct care workers, who provide hands-on care and assistance with ADLs and IADLs, contribute to workforce shortages. According to the officials, direct care workers can typically earn more by working at a fast food restaurant. Officials from Montana and Mississippi and officials from three of the MCOs said the workforce shortages are often worse in rural or remote areas, where travel across long distances is common. 
For example, the state officials said that it can be hard to find a provider willing to drive a long distance each way to work for only a few hours. To respond to these workforce issues, officials from Montana and Mississippi and two MCOs reported offering higher payment rates to providers. In 2017, the Montana legislature approved special funding to raise the hourly wage for direct care workers providing care in certain Medicaid HCBS programs in state fiscal year 2019. Officials from Mississippi said that based on a study of provider reimbursement rates in one of their HCBS waiver programs, the state raised payment rates for agencies that employ direct care workers and other providers in 2017. Officials said they hoped the increase would create an incentive to recruit and develop providers in more rural areas. Officials from Arizona and Montana and one MCO also mentioned that Medicaid's participant-directed options—which allow beneficiaries to draw paid caregivers from among their family members, friends, and neighbors—had helped to address HCBS workforce shortages. Arizona officials said that roughly half of the beneficiaries receiving personal care services in the state's HCBS program got their care from family members, including spouses and parents of adult children living in the home. Serving HCBS Beneficiaries with Complex Needs Officials from four of the five selected states and all four MCOs we spoke with said they faced challenges providing HCBS for beneficiaries with complex medical or behavioral health needs. Officials we interviewed said that complex medical conditions can be hard to accommodate in home- and community-based settings. For example, officials from Mississippi and one MCO mentioned difficulties finding appropriate placements for individuals requiring ventilator services. State and MCO officials also reported that complex conditions that affect beneficiaries' behavior, such as co-occurring developmental disabilities and behavioral health conditions, dementia, and traumatic brain injury can also create challenges for providing HCBS, particularly when beneficiaries display aggressive or other challenging behaviors. Officials from one MCO explained that these beneficiaries' challenging behaviors can cause friction between beneficiaries and their providers and make it harder for beneficiaries to sustain good relationships with providers. Officials from the selected states and MCOs we interviewed said that they have responded to the challenge of serving HCBS beneficiaries with complex medical or behavioral health needs by (1) supporting the development of locations in the community to serve individuals with specific complex needs, (2) training providers, and (3) increasing care coordination. Officials from one MCO said that they worked with nurses in the community to support the development of adult foster homes as an alternative to institutional care for beneficiaries who require ventilator services. Similarly, Montana officials said they had reached out to community partners, such as assisted living facility owners, to educate them on what Medicaid can and cannot pay for in order to aid them in developing multiple funding streams for specialized programs for individuals with traumatic brain injury.
Montana officials and officials from an MCO said they had offered behavioral health training for providers; Montana offered a mental health first aid class for providers, and MCO officials reported sending behavioral health specialists into assisted living facilities to help train staff on handling challenging behaviors in an effort to avoid beneficiaries being moved out of the assisted living facility and into an institutional setting. Regarding care coordination, Arizona officials reported that the state is planning to offer beneficiaries with intellectual or developmental disabilities the choice of a model of care that integrates medical care, behavioral health care, and certain LTSS, under a single, comprehensive managed care contract beginning in October 2019. Officials from one MCO said this model of care will help better identify needs and coordinate care, for example, for children with autism and a co-occurring behavioral health condition. Limited Funding for HCBS Programs Officials from four selected states and officials from one of the MCOs in the fifth state told us that limits on funding for HCBS programs were a challenge, particularly in the context of the growing number of individuals with LTSS needs. Officials from Mississippi said that lack of funding from the state legislature had affected the enrollment of beneficiaries in certain HCBS waivers. Specifically, officials said that the state was unable to enroll as many beneficiaries in certain waivers as were approved by CMS, and that only a limited number of beneficiaries had been added to these programs for the past 2 or 3 years. Officials from one MCO in Arizona said that state budget constraints had led to past reductions in the amount of certain HCBS, such as respite care. Oregon officials said that the state experienced budgetary pressures as a result of implementing its 1915(k) Community First Choice state plan program, namely, that the increase in federal funding the state received did not fully cover the increased cost of serving all eligible beneficiaries as required under this option. Florida officials said that the state has experienced rapid growth in the population with LTSS needs and that this growth, combined with medical advances that prolong life and reduce attrition from waiver programs, had contributed to a growing waiting list for HCBS. Officials who cited HCBS funding as a challenge said that they responded to these challenges by, among other things, providing information to their legislatures on the projected need for HCBS to inform future funding decisions. For example, Florida officials said that they educate the legislature about funding needs by conducting estimating conferences that produce information that is provided to the Governor and both legislative houses to use when deciding funding amounts. The information provided includes the growth in the population of frail elders, the projected demand for Medicaid, the cost of providing HCBS, and the cost avoidance achieved by keeping people out of nursing homes. State officials have also leveraged alternative funding sources—including federal grants—to help respond to funding limits for HCBS. Officials from Montana and Mississippi said that CMS’s Money Follows the Person grant program—which provided state Medicaid programs with funding for beneficiaries to transition out of institutions—had helped them to serve more individuals in home- and community-based settings. 
Montana officials noted that Money Follows the Person provided the state with extra help to transition beneficiaries from institutions to community-based settings, including those who were the most difficult to serve and often had multiple co-occurring conditions. Mississippi's Money Follows the Person program—Bridge to Independence—resulted in a total of 540 beneficiaries moving from institutions to home- and community-based settings, according to state officials. Mississippi officials also noted that they maximize HCBS waiver funding by leveraging other potential funding sources, such as charitable organizations, that could pay for items such as a wheelchair ramp for a beneficiary before waiver funds were expended. Other Challenges State and MCO officials also mentioned other challenges providing HCBS: Affordable housing. Officials from Mississippi and Montana and one MCO cited the lack of affordable housing as a barrier for beneficiaries wishing to transition out of an institution. The MCO officials we spoke with said their transitions team, which assists beneficiaries who are moving out of an institution into the community, includes a housing coordinator whose job it is to track available housing and help beneficiaries find housing they can afford. Limits on HCBS spending per beneficiary. Officials from one MCO said that the state's limit on HCBS waiver spending per beneficiary—requiring that spending for HCBS not exceed the cost of institutional care—was a challenge, particularly for beneficiaries with high needs. The officials indicated that the MCO tracks HCBS spending for each beneficiary and reviews plans of care when a beneficiary reaches 80 percent and 95 percent of the spending limit. Beneficiaries whose spending exceeds 100 percent for more than a 6-month period can choose to move to an institutional setting or to continue to receive more limited HCBS that do not exceed the cost of care in an institution. In cases where the MCO believed the beneficiary could not be safely served in the community at that level of spending, officials said that beneficiaries and their families were required to sign a form acknowledging the safety risks. Agency Comments HHS provided technical comments on a draft of this report, which we incorporated as appropriate. As discussed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days after its issuance date. At that time, we will send copies of this report to the Secretary of Health and Human Services and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions, please contact me at (202) 512-7114 or yocomc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix II.
Appendix I: Home- and Community-Based Services Programs in Selected States

The appendix table lists each selected state's HCBS programs, the authorizing statute for each program, and the population served. Recoverable program entries include: the Developmental Disabilities Individual Budgeting Waiver (1915(c) and 1915(j)); the Familial Dysautonomia Waiver (1915(c); individuals diagnosed with familial dysautonomia); a 1915(c) waiver for children under 21 years of age with degenerative spinocerebellar disease; the Intellectual Disabilities/Developmental Disabilities Waiver (1915(c)); the Traumatic Brain Injury/Spinal Cord Injury Waiver (1915(c); individuals with traumatic brain injury or spinal cord injury); a 1915(i) state plan benefit for individuals with intellectual or developmental disabilities; the Home and Community-Based Waiver for Individuals with Developmental Disabilities (1915(c)); the Children's Autism Waiver (1915(c)); the Severe and Disabling Mental Illness HCBS Waiver (1915(c)); the Medically Involved Children's Waiver (1915(c)); the Behavioral Intermediate Care Facility for Individuals with Intellectual Disabilities Model Waiver (1915(c)); the Intermediate Care Facility for Individuals with Intellectual Disabilities (ICF/IID) Comprehensive Waiver (1915(c)); the Intermediate Care Facility for Individuals with Intellectual Disabilities (ICF/IID) Support Services Waiver (1915(c)); a 1915(i) state plan benefit for individuals 18 years of age or older with intellectual or developmental disabilities; State Plan Personal Care Services (1905(a)(24)); and Community First Choice programs (1915(k)).

Appendix II: GAO Contact and Staff Acknowledgments

In addition to the contact named above, Michelle Rosenberg, Assistant Director; Hannah Locke, Analyst-in-Charge; Romonda McKinney Bumpus; Krister Friday; Vikki Porter; and Jennifer Whitworth made key contributions to this report.

Related GAO Products

Medicaid Assisted Living Services: Improved Federal Oversight of Beneficiary Health and Welfare Is Needed. GAO-18-179. Washington, D.C.: January 5, 2018.
Medicaid: CMS Should Take Additional Steps to Improve Assessments of Individuals' Needs for Home- and Community-Based Services. GAO-18-103. Washington, D.C.: December 14, 2017.
Medicaid Managed Care: CMS Should Improve Oversight of Access and Quality in States' Long-Term Services and Supports Programs. GAO-17-632. Washington, D.C.: August 14, 2017.
Medicaid: CMS Needs Better Data to Monitor the Provision of and Spending on Personal Care Services. GAO-17-169. Washington, D.C.: January 12, 2017.
Medicaid Managed Care: Improved Oversight Needed of Payment Rates for Long-Term Services and Supports. GAO-17-145. Washington, D.C.: January 9, 2017.
Medicaid Personal Care Services: CMS Could Do More to Harmonize Requirements across Programs. GAO-17-28. Washington, D.C.: November 23, 2016.
Long-Term Care Workforce: Better Information Needed on Nursing Assistants, Home Health Aides, and Other Direct Care Workers. GAO-16-718. Washington, D.C.: August 16, 2016.
Older Adults: Federal Strategy Needed to Help Ensure Efficient and Effective Delivery of Home- and Community-Based Services and Supports. GAO-15-190. Washington, D.C.: May 20, 2015.
Medicaid: States' Plans to Pursue New and Revised Options for Home- and Community-Based Services. GAO-12-649. Washington, D.C.: June 13, 2012.
Why GAO Did This Study The need for LTSS to assist individuals with limited abilities for self-care is expected to increase, in part due to the aging of the population. Medicaid is the nation's primary payer of LTSS, with spending estimated at $167 billion in 2016. State Medicaid programs are generally required to cover LTSS provided in institutions, such as nursing homes, but coverage of the same services outside of institutions—that is, HCBS—is generally optional. In recent years there have been efforts to shift the balance of LTSS away from institutions through the expanded use of HCBS. National spending for HCBS has increased and now exceeds that for services in an institution. However, the extent to which Medicaid programs cover HCBS varies by state, as does the structure of states' HCBS programs. GAO was asked to review the approaches states use to provide coverage for HCBS in the Medicaid program. For selected states, this report describes (1) decisions that influenced the structure of Medicaid HCBS programs, and (2) challenges providing HCBS to Medicaid beneficiaries and efforts to respond to these challenges. GAO reviewed information and conducted interviews with officials from a nongeneralizable sample of five states, which GAO selected to obtain variation in the percentage of total Medicaid LTSS expenditures used for HCBS, geography, and other factors. GAO also reviewed information and interviewed officials from four MCOs—two in each of the two selected states that used managed care to provide HCBS. The MCOs varied in enrollment size and population served. What GAO Found All state Medicaid programs finance coverage of long-term services and supports (LTSS), which help beneficiaries with physical, cognitive, or other limitations perform routine daily activities, such as eating, dressing, and making meals. When these services are provided in beneficiaries' homes or other community settings instead of nursing homes, the services are known as home- and community-based services (HCBS). The structure of the 26 HCBS programs GAO reviewed in five states—Arizona, Florida, Mississippi, Montana, and Oregon—reflected decisions about which populations to cover, whether to limit eligibility or enrollment, and whether to use managed care. Populations: Four of the five states had multiple HCBS programs that targeted specific populations. For example, Mississippi had separate HCBS programs for aged or physically disabled individuals and individuals with intellectual or developmental disabilities. The fifth state, Arizona, had one program that targeted two specific populations. Eligibility: All five states had at least one HCBS program that limited eligibility to beneficiaries whose needs would otherwise require care in a nursing home or other institutional setting. Enrollment: Four of the five states limited enrollment in one or more of their HCBS programs; 19 of the 26 programs had enrollment caps, and 12 of these programs maintained a waiting list. Managed care: Two of the five states used managed care to provide HCBS, paying managed care organizations (MCOs) a fixed fee for each beneficiary rather than paying providers for each service delivered. State and MCO officials identified several challenges providing HCBS and described their efforts to respond to them: HCBS workforce: Officials cited challenges recruiting and retaining HCBS providers, particularly given the low wages these providers typically receive.
To respond to this, officials from Mississippi, Montana, and two of the MCOs reported offering providers higher payment rates. Complex needs: Officials described challenges serving beneficiaries with complex medical and behavioral health needs, including individuals who display aggressive or other challenging behaviors. Officials from Montana and one MCO reported responding to this challenge by providing behavioral health training for providers. HCBS funding: State officials reported that limitations on overall HCBS funding levels posed a challenge, which they responded to by providing their state legislatures with information on the projected need for HCBS to inform future funding decisions, and leveraging other available resources, such as federal grants. The Department of Health and Human Services provided technical comments on a draft of this report, which GAO incorporated as appropriate.
Executive Branch Agencies Have Made Progress Reforming the Security Clearance Process, but Long-Standing Key Initiatives Remain Incomplete The PAC Has Made Progress Reforming the Personnel Security Clearance Process The PAC has made progress in reforming the personnel security clearance process and implementing various security clearance reform initiatives. For example, the PAC has taken action on 73 percent of the recommendations of a February 2014 review conducted in the wake of the Washington Navy Yard shooting. Actions in response to these recommendations included ODNI and OPM jointly issuing Quality Assessment Standards in January 2015, which establish federal guidelines for assessing the quality of investigations. Additionally, ODNI developed the Quality Assessment Reporting Tool, through which agencies will report on the completeness of investigations. Similarly, the PAC reported quarterly on the status and progress of key initiatives, as part of the Insider Threat and Security Clearance Reform cross-agency priority goal. This reporting included the milestone due date and status for each initiative. According to PAC Program Management Office officials, although the data are no longer publicly reported, they have continued to track the status of these milestones internally, and identified almost half of the initiatives—16 of 33—as complete as of the third quarter of fiscal year 2017. Additionally, the PAC has issued three documents that serve as its updated strategic framework for the next 5 years. In July 2016, it issued its Strategic Intent for Fiscal Years 2017 through 2021, which identifies the overall vision, goals, and 5-year business direction to achieve an entrusted workforce. In October 2016, it issued an updated PAC Enterprise IT Strategy, which provides the technical direction to provide mission-capable and secure security, suitability, and credentialing IT systems. According to PAC program management officials, the third document—PAC Strategic Intent and Enterprise IT Strategy Implementation Plan—was distributed to executive branch agencies in February 2017. Further, we reported in December 2017 that PAC members noted additional progress in reforming the personnel security clearance process, such as the development of Security Executive Agent Directives, the identification of executive branch-wide IT shared service capabilities, and the standardization of adjudicative criteria. Long-Standing Key Reform Initiatives Remain Incomplete Although the PAC has reformed many parts of the personnel security clearance process, the implementation of certain key initiatives, including the full implementation of the 2012 Federal Investigative Standards and the development of government-wide performance measures for the quality of investigations, remains incomplete. The Federal Investigative Standards outline criteria for conducting background investigations to determine eligibility for a security clearance, and are intended to ensure cost-effective, timely, and efficient protection of national interests and to facilitate reciprocal recognition of the resulting investigations. However, the standards also changed the frequency of periodic reinvestigations and included continuous evaluation as a new requirement for certain clearance holders. Continuous evaluation is a key executive branch initiative to more frequently identify and assess security-relevant information, such as criminal activity, between periodic reinvestigations.
Continuous evaluation is a process to review the background of an individual who has been determined to be eligible for access to classified information or to hold a sensitive position at any time during the period of eligibility. Continuous evaluation involves automated record checks conducted on a more frequent basis, whereas periodic reinvestigations are conducted less frequently and may include, among other things, subject and reference interviews. The types of records checked as part of continuous evaluation are the same as those checked for other personnel security purposes. Security-relevant information discovered in the course of continuous evaluation is to be investigated and adjudicated under the existing standards. Efforts to implement an executive branch continuous evaluation program go back to at least 2008, with a milestone for full implementation by the fourth quarter of fiscal year 2010. In November 2017, we reported that while ODNI has taken an initial step to implement continuous evaluation in a phased approach across the executive branch, it had not determined when the future phases of implementation will occur. We recommended, among other things, that the Director of National Intelligence develop an implementation plan. ODNI generally concurred with that recommendation. Regarding government-wide measures for the quality of background investigations, as noted earlier, ODNI and OPM issued the Quality Assessment Standards and ODNI issued the Quality Assessment Reporting Tool. The Quality Assessment Standards established federal guidelines for assessing the quality of investigations. The Quality Assessment Reporting Tool is a tool through which agencies will report on the completeness of investigations. However, measures for quality have not been developed, and it is unclear when this key effort will be completed. The original milestone for completing government-wide measures was fiscal year 2010, and no new milestone has been established. In our December 2017 report, we recommended that the Director of National Intelligence, in his capacity as the Security Executive Agent, and in coordination with the other PAC Principals, establish a milestone for the completion of government-wide performance measures for the quality of investigations. ODNI disagreed with the recommendation, stating that it is premature to establish such a milestone and that it will do so once the Quality Assessment Reporting Tool metrics have been fully analyzed. We continue to believe that setting a milestone, which takes into consideration the amount of time needed to analyze Quality Assessment Reporting Tool data, will help to ensure that the analysis of the data is completed, initial performance measures are developed, and agencies have a greater understanding of what they are being measured against. Agencies Meeting Timeliness Objectives for Clearances Decreased, and a Government-Wide Approach Has Not Been Developed to Improve Timeliness or Address the Backlog Our analysis of government-wide and agency-specific data shows a decline in the number of executive branch agencies meeting the timeliness objectives for processing clearances. While ODNI has taken steps to address timeliness challenges, it has not developed a government-wide approach to help agencies improve the timeliness of initial personnel security clearances. 
Additionally, the backlog of background investigations conducted by NBIB—the primary entity responsible for conducting background investigations—has steadily increased since 2014 and, as of February 2018, exceeds 710,000 cases. NBIB personnel are attempting to decrease the backlog by making the background investigation process more effective and efficient and by increasing investigator capacity. However, NBIB faces challenges in developing a plan to reduce the size of the investigation backlog to a manageable level.

Agencies Meeting Timeliness Objectives Decreased

Our analysis showed that the percentage of executive branch agencies meeting timeliness objectives for investigations and adjudications decreased from fiscal years 2012 through 2016. The Intelligence Reform and Terrorism Prevention Act of 2004 (IRTPA) established an objective for each authorized adjudicative agency to make a determination on at least 90 percent of all applications for a personnel security clearance within an average of 60 days after the date of receipt of the completed application by an authorized investigative agency. The objective allows no longer than 40 days to complete the investigative phase and 20 days to complete the adjudicative phase. In assessing timeliness under these objectives, executive branch agencies exclude the slowest 10 percent of cases and report on the average of the remaining 90 percent (referred to as the fastest 90 percent).
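To illustrate the fastest 90 percent calculation described above, the following minimal Python sketch (illustrative only; the case durations are hypothetical, and this does not represent ODNI's or GAO's actual methodology or tooling) excludes the slowest 10 percent of cases and compares the average of the remaining cases against the IRTPA investigation objective:

    # Illustrative sketch of the "fastest 90 percent" timeliness metric.
    # The case durations below are hypothetical, not actual agency data.
    def fastest_90_average(days_per_case):
        """Drop the slowest 10 percent of cases and average the rest."""
        ordered = sorted(days_per_case)
        cutoff = int(len(ordered) * 0.9)  # keep the fastest 90 percent
        fastest = ordered[:cutoff]
        return sum(fastest) / len(fastest)

    # IRTPA objectives: 40 days to investigate and 20 days to adjudicate,
    # for a 60-day end-to-end objective on initial clearances.
    investigation_days = [35, 38, 42, 55, 60, 71, 90, 95, 120, 300]
    average = fastest_90_average(investigation_days)
    print(f"Fastest 90 percent average: {average:.1f} days; "
          f"40-day objective met: {average <= 40}")

Under these hypothetical durations, the sketch reports an average of about 67 days for the fastest 90 percent, so the 40-day investigation objective would not be met; this mirrors the pattern the PAC reported for fiscal year 2016, discussed next.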
As part of the Insider Threat and Security Clearance Reform cross-agency priority goal, the PAC reported quarterly on the average number of days to initiate, investigate, adjudicate, and complete the end-to-end process for initial secret and initial top secret cases and periodic reinvestigations for the executive branch as a whole from fiscal years 2014 through 2016. For fiscal year 2016, the PAC reported that the government-wide average for executive branch agencies

• did not meet the 40-day investigation objective for the fastest 90 percent of initial secret clearances for any quarter; the averages ranged from 92 days to 135 days;
• did not meet ODNI's revised investigation objective for the fastest 90 percent of initial top secret clearances for any quarter; the averages ranged from 168 days to 208 days;
• did not meet the goal of conducting the investigative portion of periodic reinvestigations within 150 days for the fastest 90 percent of cases for any quarter; the averages ranged from 175 days to 192 days; and
• did not meet the goal of completing periodic reinvestigations—the end-to-end goal—within 195 days for any quarter; the averages ranged from 209 days to 227 days.

Our analysis of timeliness data for specific executive branch agencies showed that the percentage of agencies meeting established investigation and adjudication timeliness objectives for initial secret and top secret personnel security clearances and periodic reinvestigations decreased from fiscal years 2012 through 2016. We found that both agencies with delegated authority to conduct their own investigations and those that use NBIB as their investigative provider experienced challenges in meeting established investigative timeliness objectives. Specifically, in fiscal year 2012, we found that, of the agencies for which we obtained data,

• 73 percent did not meet investigation and adjudication objectives for at least three of four quarters for initial secret clearances;
• 41 percent did not meet those objectives for initial top secret clearances; and
• 16 percent did not meet the investigative goal for at least three of four quarters for the fastest 90 percent of periodic reinvestigations.

By fiscal year 2016, the percentage of agencies that did not meet these same objectives had increased to 98 percent, 90 percent, and 82 percent, respectively. Furthermore, ODNI requests individual corrective action plans from agencies not meeting security clearance timeliness objectives. However, the executive branch has not developed a government-wide plan, with goals and interim milestones, to meet established timeliness objectives for initial security clearances that takes into consideration increased investigative requirements and other stated challenges. In our December 2017 report, we recommended that the Director of National Intelligence, as Security Executive Agent, develop a government-wide plan, including goals and interim milestones, to meet timeliness objectives for initial personnel security clearance investigations and adjudications. Although the DNI did not specifically comment on this recommendation, we continue to believe a government-wide plan would better position ODNI to identify and address any systemic government-wide issues.

We also recommended that the Director of National Intelligence conduct an evidence-based review of the investigation and adjudication timeliness objectives and take action to adjust the objectives if appropriate. The DNI did not agree with this recommendation and stated that it is premature to revise the existing timeliness goals until NBIB's backlog is resolved. We continue to believe that our recommendation to conduct an evidence-based review, using relevant data, is valid. As we noted in our report, even agencies with delegated authority to conduct their own investigations are experiencing challenges meeting established timeliness objectives. We also noted that ODNI has not comprehensively revisited the investigation or adjudication timeliness objectives for initial security clearances to account for changes such as the increased investigative requirements stemming from the implementation of the 2012 Federal Investigative Standards.

Backlog of Background Investigations Has Steadily Increased since 2014

The executive branch's challenges in meeting investigation timeliness objectives for initial personnel security clearances and periodic reinvestigations have contributed to a significant backlog of background investigations at NBIB, the primary entity responsible for conducting background investigations. NBIB personnel are attempting to decrease the backlog by making the background investigation process more effective and efficient. To do so, NBIB conducted a business process reengineering effort that was intended to identify challenges in the process and their root causes. Specifically, NBIB officials cited efforts that have been implemented to reduce the number of personnel hours necessary to complete an investigation, such as centralizing interviews, using video-teleconferencing for overseas investigations (to decrease travel time), automating record checks, and using focused writing (to make reports more succinct and less time-consuming to prepare).
However, NBIB has not identified how the implementation of the business process reengineering effort will affect the backlog or the need for additional investigators in the future. In December 2017, we recommended that the Director of NBIB develop a plan, including goals and milestones, that includes a determination of the effect of the business process reengineering efforts on reducing the backlog to a "healthy" inventory of work, representing approximately 6 weeks of work. NBIB concurred with this recommendation. NBIB documentation shows that the backlog of pending investigations increased from about 190,000 in August 2014 to more than 710,000 as of February 2018, as shown in figure 1. NBIB's Key Performance Indicators report states that a "healthy" inventory of work is around 180,000 pending investigations, representing approximately 6 weeks of work (that is, a completion rate of roughly 30,000 investigations per week), and would allow NBIB to meet timeliness objectives. ODNI officials stated that several significant events contributed to agency challenges in meeting timeliness objectives over the past 5 fiscal years, including a government shutdown, the 2015 OPM data breach, a loss of OPM contractor support, and OPM's review of the security of its IT systems, which resulted in the temporary suspension of the web-based platform used to complete and submit background investigation forms. In addition, executive branch agencies noted the increased investigative requirements stemming from the 2012 Federal Investigative Standards as a further challenge to meeting established timeliness objectives in the future.

While NBIB has taken steps to increase its capacity to conduct background investigations by increasing its own investigator staff as well as awarding new contracts, in our December 2017 report we noted that NBIB officials had assessed four scenarios, ranging from the status quo—assuming no additional contractor or federal investigator hires—to an aggressive contractor staffing plan beyond January 2018. The two scenarios that NBIB identified as most feasible would not result in a "healthy" inventory level until fiscal year 2022 at the earliest. In our December 2017 report, we recommended that the Director of NBIB establish goals for increasing total investigator capacity—federal employees and contractor personnel—in accordance with the plan for reducing the backlog of investigations, as noted above. NBIB concurred with this recommendation.

The Potential Effects of Continuous Evaluation on Executive Branch Agencies Are Unknown

We reported in November 2017 that the potential effects of continuous evaluation on executive branch agencies are unknown because future phases of the program and the effect on agency resources have not yet been determined. ODNI has not yet determined key aspects of its continuous evaluation program, which has limited the ability of executive branch agencies to plan for implementation in accordance with ODNI's phased approach. For example, while ODNI has initiated the first phase of continuous evaluation in coordination with implementing executive branch agencies, it has not yet determined what the future phases of implementation will entail or when they will occur. As we reported in November 2017, the uncertainty regarding the requirements and time frames for the future phases of the program has affected the ability of executive branch agencies to plan to implement continuous evaluation and estimate the associated costs.
Although executive branch agencies have identified increased resources as a risk associated with implementing continuous evaluation, and ODNI has acknowledged that risk, ODNI, in coordination with the PAC, has not assessed the potential effects of continuous evaluation on agencies' resources. Further, ODNI has not developed a plan, in consultation with implementing agencies, to address such effects, including modifying the scope or frequency of periodic reinvestigations or replacing periodic reinvestigations for certain clearance holders. Moreover, the potential effect of continuous evaluation on periodic reinvestigations is unknown. Executive branch agencies have expressed varying views about potential changes to the periodic reinvestigation model:

• DOD officials stated that, given workload and funding issues, they see no alternative but to replace periodic reinvestigations for certain clearance holders with continuous evaluation, as the record checks conducted are the same for both processes.
• State Department officials expressed concerns that relevant information, such as state and local law-enforcement records that are not yet automated, would be missed if the department did not conduct periodic reinvestigations.
• State Department officials, along with officials from the Departments of Justice and Homeland Security, stated it may be possible to change the frequency or scope of periodic reinvestigations at some point in the future.

The Security Executive Agent Directive for continuous evaluation, issued since our report, clarified that continuous evaluation is intended to supplement but not replace periodic reinvestigations. At the time of our November 2017 report, ODNI officials stated that ODNI is not opposed to further improving the security clearance process and that, once continuous evaluation is operational, it plans to determine the efficiencies and mitigation of risks associated with the approach. Specifically, these officials stated that once continuous evaluation is further implemented and ODNI has gathered sufficient data—which they estimated would take about a year from May 2017—they can perform analysis and research to determine whether any changes are needed to the periodic reinvestigation model. We recommended that the Director of National Intelligence assess the potential effects of continuous evaluation on agency resources and develop a plan, in consultation with implementing agencies, to address those effects, such as modifying the scope of periodic reinvestigations, changing the frequency of periodic reinvestigations, or replacing periodic reinvestigations for certain clearance holders. ODNI generally concurred with this recommendation.

Finally, the National Defense Authorization Act for Fiscal Year 2018, enacted in December 2017, will have a significant impact on the personnel security clearance process. Among other things, the act authorized DOD to conduct its own background investigations and requires DOD to begin carrying out a related implementation plan by October 1, 2020. It also requires the Secretary of Defense, in consultation with the Director of OPM, to provide for a phased transition. These changes could potentially affect timeliness, the backlog, and other reform initiatives, but the effect is unknown at this time. DOD's investigations represent the majority of the background investigations conducted by NBIB.

Chairman Burr, Vice Chairman Warner, and Members of the Committee, this concludes my prepared testimony. I look forward to answering any questions.
GAO Contact and Staff Acknowledgments

If you or your staff have any questions about this testimony, please contact Brenda S. Farrell at (202) 512-3604 or at farrellb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. GAO staff who made key contributions to this testimony are Kimberly Seay (Assistant Director), James Krustapentus, Michael Shaughnessy, and John Van Schaik.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study

The government-wide personnel security clearance process was designated as a high-risk area in January 2018 because it represents one of the highest management risks in government. This testimony focuses on, among other things, the extent to which executive branch agencies (1) made progress reforming the security clearance process, and (2) are meeting timeliness objectives and reducing NBIB's investigative backlog. GAO's statement is based on information from public versions of its reports issued in November 2017 on continuous evaluation of clearance holders and in December 2017 on clearance reform efforts. Information that ODNI and OPM deemed sensitive was omitted. For those reports, GAO reviewed Executive Orders and PAC strategic documents; obtained data from the Office of the Director of National Intelligence (ODNI) on the timeliness of initial clearances and periodic reinvestigations; and interviewed officials from ODNI, NBIB, and other agencies.

What GAO Found

Executive branch agencies have made progress reforming the security clearance process, but long-standing key initiatives remain incomplete. Progress includes the issuance of federal adjudicative guidelines and updated strategic documents to help sustain the reform effort. However, agencies still face challenges in implementing aspects of the 2012 Federal Investigative Standards—criteria for conducting background investigations—and in implementing a continuous evaluation program. In addition, while agencies have taken steps to establish government-wide performance measures for the quality of investigations, neither the Director of National Intelligence (DNI) nor the interagency Security, Suitability, and Credentialing Performance Accountability Council (PAC) has set a milestone for completing their establishment.

GAO's analysis of timeliness data for specific executive branch agencies showed that the number of agencies meeting investigation and adjudication timeliness objectives for initial secret and top secret security clearances and periodic reinvestigations decreased from fiscal years 2012 through 2016. For example, while 73 percent of agencies did not meet timeliness objectives for initial secret clearances for three of four quarters in fiscal year 2012, 98 percent of agencies did not meet these objectives in fiscal year 2016. The DNI has not developed a government-wide plan, including goals and milestones, to help agencies improve timeliness.

Agencies' challenges in meeting timeliness objectives have contributed to a significant backlog of background investigations at the agency that is responsible for conducting the majority of investigations, the National Background Investigations Bureau (NBIB). NBIB documentation shows that the backlog of pending investigations increased from about 190,000 in August 2014 to more than 710,000 as of February 2018. NBIB leadership has not developed a plan to reduce the backlog to a manageable level.

What GAO Recommends

In November 2017 and December 2017, GAO made 12 recommendations to the DNI and the Director of NBIB, including setting a milestone for establishing measures for investigation quality, developing a plan to meet background investigation timeliness objectives, and developing a plan for reducing the backlog. NBIB concurred with the recommendations. The DNI concurred with some, but not all, of GAO's recommendations. GAO continues to believe they are valid.
Background

Federal and State Marijuana Laws

Marijuana refers to the dried leaves, flowers, stems, and seeds from the cannabis plant, which contains the psychoactive or mind-altering chemical delta-9-tetrahydrocannabinol (THC), as well as other related compounds. Marijuana is a controlled substance under federal law and is classified as a Schedule I drug—the most restrictive category of controlled substances under federal law. The Controlled Substances Act of 1970, as amended, does not allow Schedule I drugs, including marijuana, to be dispensed with a prescription, and provides federal sanctions for the possession, manufacture, distribution, dispensing, or use of such drugs. However, as of July 2018, 32 states and the District of Columbia had passed voter initiatives or legislation legalizing marijuana for medical purposes under state or territorial law. Of these, nine states and the District of Columbia had also passed voter initiatives or legislation legalizing marijuana for recreational purposes under state or territorial law. In addition, another 15 states have laws pertaining only to the use of products containing cannabidiol, also known as CBD, one of the non-psychoactive ingredients in marijuana plants. Nonetheless, federal penalties remain, and some marijuana-related activity may also be illegal under state law, including in states that have legalized marijuana for medical or recreational purposes. Figure 1 shows a map of marijuana legalization under state or territorial law, as of July 2018.

Illegal Marijuana Cultivation and Eradication

Marijuana is the only major illegal drug grown domestically, according to DEA. Individuals and larger organized groups, such as drug trafficking organizations, establish outdoor and indoor grow sites to cultivate marijuana. Outdoor grow sites can be located on privately-owned land, such as residential yards, farms, and timber lands, and on publicly-owned land, such as national forests, as shown in Figure 2. Indoor grow sites can be located in residential houses and larger warehouses.

Previously, we and the U.S. Department of Agriculture's (USDA) Office of Inspector General (OIG) have reported on the environmental effects of illegal marijuana cultivation on federal lands. For example, in 2010, we reported that illegal marijuana cultivation on federal lands can involve, among other things, the application of pesticides, herbicides, fertilizers, and other chemicals, including chemicals that may be banned in the United States; removal of natural vegetation; diversion of water from streams; and deposits of large amounts of trash and human waste. In 2018, USDA's OIG reported that trash and chemicals such as pesticides and fertilizers may remain at eradicated marijuana grow sites on national forest lands for multiple years, partly due to the cost of cleanup, which can reach as high as $100,000. Figure 3 shows examples of environmental effects of illegal marijuana cultivation on federal lands in California and Georgia.

Marijuana eradication operations can encompass the following activities: seizure and destruction of marijuana plants; seizure and destruction of processed marijuana, which is smokeable marijuana in the drying process, loose, or packaged; confiscation of weapons and assets; and apprehension of individuals at the grow site.
Additionally, operations may include the removal of trash and infrastructure, such as propane tanks and irrigation tubing, from outdoor grow sites during or after eradication operations to reduce the likelihood that growers will return.

DEA's Domestic Cannabis Eradication/Suppression Program

DEA established DCE/SP in 1981 to support participating state and local law enforcement agencies in their efforts to eradicate and suppress illegal, domestically-grown marijuana. Over the past three decades, DEA has provided support for marijuana eradication and suppression efforts through DCE/SP in all 50 states, Puerto Rico, Guam, and the U.S. Virgin Islands. In fiscal year 2018, DEA obligated DCE/SP funding to 125 participating agencies in 37 states. DEA's Office of Operations Management, Investigative Support Section is responsible for the overall management and oversight of DCE/SP. Personnel from DEA's field divisions and contractors are responsible for implementing DCE/SP in the field. Specifically, each DEA field division assigns a special agent to serve as the DCE/SP coordinator for each state in its area of responsibility. DCE/SP coordinators are responsible for, among other things, reviewing participating agencies' annual strategic plans for DCE/SP and approving certain purchase requests. DEA also contracts for analytical and administrative support for the program. The contract provides DEA with six personnel, referred to as regional contractors, whose primary duties include providing guidance to participating agencies on allowable program expenditures and reviewing the information participating agencies report to DEA on their program expenditures and eradication and suppression activities. DEA's implementation of DCE/SP is a multi-step process, with activities performed by DEA and participating agencies during each step, as shown in Figure 4.

Each year, DEA requests and receives funding for DCE/SP from DOJ's Assets Forfeiture Fund. To participate in DCE/SP, a state or local law enforcement agency must apply and enter into a reimbursable funding agreement with DEA. Specifically, a participating agency must submit an annual strategic plan describing, among other things, how it intends to use DCE/SP funding to address the illegal domestic marijuana threat in its area of responsibility and coordinate with other federal agencies, such as the Forest Service. DEA and the participating agency then sign a letter of agreement, whereby the participating agency agrees to eradicate and suppress illegal marijuana as part of DCE/SP, and DEA agrees to provide a specified amount of funding to the participating agency to defray the costs of those activities. This agreement also outlines program restrictions and requirements for participating agencies, which include

• using DCE/SP funds only to reimburse expenses that DEA has deemed allowable;
• obtaining approval from DEA prior to expending DCE/SP funds on certain items;
• submitting an expenditure report to DEA each quarter; and
• collecting and reporting to DEA information on marijuana eradication and suppression activities.

DEA Obligated Over $17 Million Annually on Average to DCE/SP in Recent Years; Participating Agencies Expended Most Funds on Aviation Support and Overtime

DEA Obligated Between $12.4 Million and $22 Million Annually to DCE/SP from 2015 Through Fiscal Year 2018; Five States Were Obligated About Half of the Funds Each Year

DEA obligated about $17.7 million annually on average to DCE/SP from 2015 through fiscal year 2018.
As shown in Figure 5, the total amount of funding DEA obligated to DCE/SP decreased from $22 million in 2015 to $12.4 million in fiscal year 2017, and then increased to $18 million in fiscal year 2018. During each year of the 4-year time frame we reviewed, DEA obligated most of the DCE/SP funds to support the marijuana eradication efforts of the participating agencies—for example, $14 million of the $18 million in fiscal year 2018 went to 125 participating agencies in 37 states, or approximately $378,000 on average per state. DEA obligated the remaining funds—for example, $4 million in fiscal year 2018—to pay for program support. This support includes payments for the following items:

• The DEA Aviation Division, which provided reconnaissance, surveillance, undercover operations, and marijuana eradication support to selected participating agencies, according to DEA documentation. The Aviation Division prioritized its support to participating agencies based upon their past eradication operations, the availability of aviation support provided by other participating agencies, and DCE/SP coordinators' requests for support.
• Equipment, travel, and training for DEA headquarters and field divisions to support eradication activities.
• Six regional contractors that provided administrative support to the program.

Figure 5 also shows that in each year from 2015 through fiscal year 2018, about half of total DCE/SP funds went to participating agencies in five states. For example, in fiscal year 2018 DEA obligated 48 percent of these funds to participating agencies in California, Kentucky, Georgia, Texas, and Tennessee. Moreover, by magnitude, California, Kentucky, Georgia, and Tennessee were among the top five states in each of the 4 years we examined. DEA headquarters officials reported that they obligate funding to participating agencies based on various factors, including the agencies' past performance, their level of matching investment in the program, and the approximate amount of illegal growing in an area. DEA headquarters officials noted that some marijuana grows may still be illegal under state and local law, even in those states that have legalized or regulated marijuana in some form. As such, DEA has obligated funds to participating agencies in states with and without some form of marijuana legalization under state law.

Participating Agencies Expended Most Funds on Aviation Support and Overtime in Recent Years

Participating state and local agencies have expended DCE/SP funds on a range of items, as described below. However, we calculated that two items—aviation support and overtime—accounted for a large majority of their expenditures in each of the 3 years we reviewed, from 2015 through fiscal year 2017. For example, participating agencies expended 46 percent on overtime and 38 percent on aviation support in fiscal year 2017, as shown in Figure 6.

Aviation Support. Participating agencies expended 43 percent ($17.0 million) of their DCE/SP funds to rent aircraft or purchase fuel for aviation support from 2015 through fiscal year 2017, according to DEA data. For example, officials from a participating state agency in California reported expending DCE/SP funds to contract for the use of helicopters for at least 90 days per year, which they use to support marijuana eradication efforts across the state.
Officials from participating local agencies in California reported that aircraft support is critical to their marijuana eradication efforts because it allows them to conduct aerial surveillance to detect possible marijuana grow sites, transport personnel in and out of grow sites in remote areas, and remove large quantities of marijuana plants from grow sites, as shown in Figure 7. Overtime. Participating agencies expended 40 percent ($16.0 million) of their DCE/SP funds to pay employee overtime from 2015 through fiscal year 2017, according to DEA data. Officials from a participating agency in Nevada told us that marijuana eradication is labor-intensive—in some cases involving long hikes and camping in the mountains—which can result in overtime costs. In addition, officials from a participating agency in Michigan told us that they expend DCE/SP funds to reimburse members of state task force teams for overtime costs incurred during their participation in marijuana eradication operations, which generally involves 1- to 3-hour extensions of their regular shifts. Travel and per diem. Participating agencies expended 6 percent ($2.3 million) of their DCE/SP funds to pay travel and per diem costs from 2015 through fiscal year 2017, according to DEA data. For example, officials from a participating agency in Nevada reported that traveling to marijuana grow sites in remote areas may take up to 6 hours, which requires them to incur travel and per diem costs for overnight stays. In addition, DEA headquarters officials reported that officials from participating agencies who attend the DCE/SP national strategic meeting are permitted to expend DCE/SP funds to pay for travel and per diem expenses. According to DEA headquarters officials, federal, state, and local officials from across the country attend the strategic meeting to discuss trends and issues related to illegal marijuana cultivation, and DCE/SP’s priorities and goals. Supplies, clothing, and protective gear. Participating agencies expended 3 percent ($1.1 million) of their DCE/SP funds to purchase supplies, and another 2 percent ($0.8 million) to purchase clothing and protective gear from 2015 through fiscal year 2017, according to DEA data. For example, officials from a participating agency in Texas reported expending DCE/SP funds to purchase machetes for cutting marijuana plants; cameras for taking pictures or filming at eradication sites; backpacks and hydration bladders; Global Positioning System devices for navigation; first aid kits; gloves to protect personnel from pesticides, fertilizers, and other hazardous chemicals; and heavy-duty pants and shirts, as shown in Figure 7. Equipment. Participating agencies expended 3 percent ($1.0 million) of their DCE/SP funds to purchase equipment from 2015 through fiscal year 2017, according to DEA data. For example, officials from participating agencies in Georgia, Kentucky, and Texas told us that they have expended DCE/SP funds to purchase all-terrain vehicles, which they use to help access marijuana grow sites more quickly than on foot, and help them to navigate difficult terrain, including mountainous areas. Figure 7 includes a photo of an all-terrain vehicle purchased with DCE/SP funds. All other expenditures. Participating agencies expended 2 percent ($0.6 million) of their DCE/SP funds on training, and another 1 percent ($0.4 million) on miscellaneous commercial contracts from 2015 through fiscal year 2017. 
Participating agencies also expended less than 1 percent of their DCE/SP funds on both container and space rental ($0.2 million) and vehicle rental ($0.1 million) from 2015 through fiscal year 2017.

Factors that affect how participating agencies expended funds. Officials from participating agencies we spoke with in six selected states—California, Georgia, Kentucky, Michigan, Nevada, and Texas—as well as DEA and Forest Service, provided perspectives on factors that affected how participating agencies expended DCE/SP funds to support their marijuana eradication efforts.

State marijuana legalization. Officials we spoke with said that they expended DCE/SP funds to help eradicate marijuana grow sites not in compliance with their state and local laws. For example, in Georgia—where medical or recreational marijuana has not been legalized under state law—state officials reported that they strive to eradicate all marijuana grow sites. By comparison, state and local officials in California—where medical and recreational use of marijuana is legal under state law—said that all of the grow sites they eradicate are in violation of state and local laws. These grow sites include those on public lands such as national forests, and private land that had been trespassed upon.

Marijuana eradication on national forests. DEA requires participating agencies to coordinate with Forest Service when conducting DCE/SP-funded eradication efforts on national forests. Officials from Forest Service and participating agencies we spoke with reported that they coordinate with one another when planning and conducting marijuana eradication on national forests—and that some of these efforts are funded by DCE/SP. For example, Forest Service officials in Kentucky reported that they participate in planning meetings with the state's marijuana eradication task force. During the eradication season, Forest Service conducts aerial surveillance in helicopters funded by the state police using DCE/SP funds, and assists with eradication operations when available. As another example, officials in Georgia reported expending DCE/SP funds to conduct aerial surveillance to detect possible marijuana grow sites on national forests. Officials from some participating agencies we spoke with reported that they were able to expend DCE/SP funds to assist Forest Service with the removal of infrastructure such as sleeping bags and irrigation tubes at marijuana grow sites on national forests. For example, officials from a participating state agency in California reported that they assist with the removal of basic infrastructure and chemicals when feasible. However, Forest Service is responsible for the removal of infrastructure and subsequent environmental reclamation on national forests.

DEA Oversees Participating Agencies' Expenditure of DCE/SP Funds in Various Ways, but Does Not Consistently Collect the Supporting Documentation

DEA Provides Guidance, Pre-approves Purchases, Conducts On-Site Observations, and Reviews Information on Participating Agencies' Expenditures to Help Ensure Compliance with Program Requirements

DEA oversees participating agencies' expenditure of DCE/SP funds in various ways to help ensure compliance with program requirements, including the following:

Provides guidance. DEA provides participating agencies a copy of its DCE/SP Handbook, which describes, among other things, information on allowable and non-allowable uses of DCE/SP funds.
For example, the Handbook explains that participating agencies may expend DCE/SP funds to pay overtime costs of officers participating in eradication activities if the officers otherwise would be unable to participate, but may not expend DCE/SP funds to pay for employee benefits. In addition, participating agencies may expend DCE/SP funds on equipment, such as all-terrain vehicles and Global Positioning System devices, but may not purchase body armor, firearms, or tasers. See Table 1 for additional information on allowable uses of DCE/SP funds.

Pre-approves certain purchases. DEA pre-approves certain equipment purchases and requires additional review procedures to pre-approve higher-cost items. According to DEA guidance and headquarters officials, participating agencies are required to submit a purchase request form to DEA for the purchase of all durable supplies, materials, and equipment. A participating agency must also attach supporting documentation along with the request form, including price quotes, a description of the items, and the intended use. Purchases up to $2,500 are approved by the DCE/SP coordinator, while purchases greater than $2,500, or 10 percent or more of an agency's obligated funds, also require approval from the DEA Special Agent in Charge in the applicable DEA field division, who then passes the request along to DEA headquarters officials for final approval (see the sketch following this list).

Conducts on-site observations. DEA headquarters officials told us that, as part of their oversight for fiscal year 2017, they conducted on-site observations of participating agencies in seven states at training events, eradication operations, and participating agencies' facilities. DEA headquarters officials said that they selected the site visit locations based on participating agencies' funding levels and input from DEA field officials, among other factors. According to these officials, site visits allowed DEA to observe participating agencies' equipment and compare it with documentation on pre-approved equipment purchases and reported expenditures. DEA was unable to provide information about the location or results of site visits prior to fiscal year 2017 due to both a lack of documentation and recent personnel turnover. However, DEA began documenting the location and results of site visits for fiscal year 2017. According to officials, the site visits did not reveal instances of misuse of funds in fiscal year 2017. Officials noted that documenting site visits is an important practice that will help inform the program's plans for future site visits, and could help DEA identify best practices for marijuana enforcement to share with participating agencies. In addition, some DCE/SP coordinators we spoke with said that on-site observations help them to oversee participating agencies' expenditure of program funds in the field. For example, one DCE/SP coordinator said that he has daily on-site contact with participating agencies, and that although he had not observed any misuse of funds, his on-site presence would allow him to detect misuse if it were to occur.

Reviews information on program expenditures. DEA's DCE/SP Handbook requires participating agencies to submit cumulative quarterly expenditure reports specifying how much the agency expended in each of the allowable expense categories, such as overtime, aviation support, and equipment. DEA regional contractors are required to review quarterly expenditure reports, and sign and submit the reports to headquarters for further review. Headquarters officials told us that they may ask participating agencies to clarify reported expenditures, and DEA may withhold funding if necessary until any issues are resolved. DEA also requires participating agencies to provide supporting documentation, such as receipts, for certain expenses claimed in the end-of-year quarterly expenditure reports.
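The pre-approval thresholds described in the list above amount to a simple routing rule. As a minimal sketch (purely illustrative; the function name and dollar amounts in the example calls are hypothetical, and this is not DEA's actual system or terminology), the rule can be expressed in Python as follows:

    # Illustrative paraphrase of the DCE/SP purchase pre-approval routing
    # described above; not DEA's actual system.
    def approval_route(purchase_cost, agency_obligated_funds):
        """Return the approval chain a purchase request would follow."""
        chain = ["DCE/SP coordinator"]  # all durable-item purchases start here
        if purchase_cost > 2_500 or purchase_cost >= 0.10 * agency_obligated_funds:
            # Higher-cost items escalate beyond the coordinator.
            chain += ["DEA Special Agent in Charge", "DEA headquarters"]
        return chain

    print(approval_route(1_800, 100_000))   # ['DCE/SP coordinator'] only
    print(approval_route(12_000, 100_000))  # full three-level chain

In the second example call, the hypothetical $12,000 purchase exceeds both the $2,500 threshold and 10 percent of the agency's hypothetical $100,000 in obligated funds, so it would require Special Agent in Charge and headquarters approval.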
DEA Does Not Consistently Collect Supporting Documentation for Participating Agencies' Reported Expenditures

Notwithstanding these efforts to oversee participating agencies' expenditure of DCE/SP funds, DEA does not consistently collect supporting documentation from participating agencies regarding their reported DCE/SP expenditures. As noted above, participating agencies are required to submit a copy of a receipt or other supporting documentation for certain expenses claimed in the end-of-year quarterly expenditure reports, and regional contractors are responsible for collecting this information. However, the DEA regional contractors we spoke with had differing understandings of DEA's requirement regarding the collection of information on DCE/SP expenditures and indicated to us that they are collecting varying levels of supporting documentation. For example:

• One regional contractor told us that DEA does not specify the completeness of supporting documentation that regional contractors are required to collect. Nonetheless, he still collects supporting documentation for all expenses, which in some cases may consist of 200 pages for a single quarterly expenditure report.
• Another regional contractor told us that he is required to collect quarterly expenditure reports, and participating agencies are required to maintain supporting documentation internally. He stated that the completeness of supporting documentation he collects varies by participating agency within his region. For example, one participating state agency in his region submits supporting documentation to DEA for pre-approved equipment purchases only, but maintains supporting documentation for other expenditures internally as required. In contrast, he explained, other participating agencies in his region provide supporting documentation for all expenditures, including aviation support and overtime.
• A third regional contractor said the only clear requirement DEA has regarding the collection of information on program expenditures is that regional contractors must collect supporting documentation for large equipment expenditures. However, he still collects supporting documentation for all expenditures, including overtime.
• A fourth regional contractor told us that he is only required to collect supporting documentation for equipment, material, supply, and clothing expenditures. Accordingly, he collects supporting documentation for these expenditures from all participating agencies in his region. Some participating agencies in his region provide supporting documentation for all their expenditures, including aviation support and overtime.

Officials in headquarters told us that although they were not fully aware of these varying practices for collecting supporting documentation, they had confidence that participating agencies were maintaining documentation internally as required. Moreover, DEA headquarters officials told us that they expect regional contractors to collect supporting documentation for aviation support and overtime expenses when participating agencies submit their end-of-year quarterly expenditure reports.
However, it is our assessment that this expectation differs from DEA's written requirement because the requirement does not include supporting documentation for overtime expenses. Based on the results of our audit work, DEA headquarters officials said that they had taken initial steps to address this issue. In particular, officials said that they plan to convene a working group to discuss a potential update to DEA's requirements for the collection of supporting documentation after the eradication season in 2018. In addition, officials said they had met with regional contractors to discuss potential solutions to address this issue. However, DEA headquarters officials could not provide us with a plan for this effort. Standards for project management call for developing a plan with specific actions and time frames. By developing and implementing such a plan to ensure that regional contractors are implementing DEA's requirement for collecting supporting documentation in the intended manner, DEA could have greater assurance that program funds are being expended appropriately.

DEA Collects and Uses Information on Program Activities to Help Manage DCE/SP, but Should Strengthen Data Reliability, Clearly Document Goals, and Establish Measures

DEA Collects and Uses Information on Number of Plants Eradicated and Other Program Activities to Help Manage DCE/SP

DEA collects information from participating agencies and DEA field officials on their marijuana eradication and suppression activities to help manage DCE/SP, such as the number of marijuana plants eradicated, pounds of processed marijuana seized, and number of arrests made. For example, according to DEA's DCE/SP statistical reports, over 4 million illegal domestic marijuana plants, on average, were eradicated annually from 2015 through fiscal year 2017. Participating agencies are required to report information on their marijuana eradication and suppression activities to DEA. DEA also collects information on marijuana eradication and suppression activities its officials conduct in the field. For example, DEA field officials may unilaterally conduct eradication and suppression activities or provide support on marijuana enforcement efforts to other law enforcement agencies that do not receive program funding (nonparticipating agencies), and report information on these activities. According to DEA documents and headquarters officials, DEA uses this information to help manage the program in a variety of ways. Specifically, DEA uses the information to

• develop and maintain a national assessment of illegal domestic marijuana cultivation;
• inform the scope and nature of program activities for future years;
• support the program's funding request and determine funding levels for participating agencies; and
• assess performance on an agency-wide objective related to dismantling drug trafficking organizations.

DEA also reports this information on DCE/SP's public website.

Participating Agencies' Practices for Reporting Some of Their Marijuana Eradication and Suppression Activities Differ from DEA Guidance

We found that participating agencies have practices for reporting information on some of their marijuana eradication and suppression activities that differ from DEA's written guidance. Moreover, we found that stakeholders at all levels—participating agencies as well as DEA field and headquarters officials—had varying understandings of what participating agencies are required to report to DEA for DCE/SP.
As a result, the information DEA collects is not fully reliable for the purpose of assessing program performance. According to DEA guidance, participating agencies are required to report information—such as the number of marijuana plants eradicated—only from eradication and suppression activities funded by DCE/SP. However, among the six states we contacted, officials from participating agencies in four states and a DCE/SP coordinator from a fifth state told us that they also include information on activities from nonparticipating agencies in the information reported to DEA. As a result of this broadening of the information being reported, DEA does not have a fully accurate representation of the activities being performed by agencies receiving DCE/SP funding. Officials from these five states told us that they included this information to provide DEA with a more comprehensive assessment of the illegal domestic marijuana cultivation issue in their area. DEA headquarters officials were not aware of this reporting practice. Moreover, officials said that participating agencies should only report information resulting from their DCE/SP-funded operations, which may include results from support they provide to nonparticipating agencies. For example, if a participating agency provides support to a nonparticipating agency in the form of aircraft surveillance to help identify illegal grow sites, or additional officers to assist with an eradication operation, the participating agency should report the results from those activities to DEA. However, these expectations are not defined in DEA guidance.

DEA guidance also states that participating agencies should make every effort not to report eradication and suppression information resulting from interdiction activities, which are not considered DCE/SP-funded operations. For example, marijuana seized by a participating agency during a routine traffic stop—a type of interdiction activity—should not be reported. However, we found that participating agencies had varying understandings of whether or not to report this information to DEA. As a result, the information DEA collects from these officials is not consistent. Specifically, we identified three different practices that participating agencies followed to report eradication and suppression information resulting from routine traffic stops:

• report marijuana seized during routine traffic stops only if the marijuana can be linked back to a domestic source;
• report all marijuana seized during routine traffic stops, irrespective of source; and
• do not report any marijuana seized during routine traffic stops.

Further, we found that DEA field officials responsible for providing guidance to participating agencies had varying understandings of whether participating agencies should report information on marijuana seized during routine traffic stops to DEA. For example, two DCE/SP coordinators told us that information resulting from routine traffic stops should not be reported because DCE/SP is focused on the eradication of illegal marijuana grow sites. However, 3 of the 4 DEA regional contractors we spoke with said that participating agencies should report information resulting from routine traffic stops only if the marijuana seized can be tracked to a domestic source. DEA headquarters officials were not aware of these differing reporting practices and varying understandings.
Headquarters officials told us that they expect participating agencies to report information on marijuana seized during routine traffic stops only if the marijuana can be linked to a domestic source. However, our assessment is that this expectation is not consistent with DEA's written guidance. Officials explained that interdiction activities, such as routine traffic stops, are relevant to marijuana suppression, especially in light of recent changes in illegal marijuana cultivation and trafficking trends. For example, according to DEA officials, Kansas—a state without marijuana legalization—has recently experienced a decrease in the number of illegal outdoor marijuana grow sites in conjunction with an increase in the amount of illegal domestic marijuana being trafficked into the state from Colorado—a state with recreational and medical marijuana legalization.

Standards for Internal Control in the Federal Government state that management should use quality information—including accurate and consistent information—to achieve the entity's objectives. Federal standards for internal control also state that management should communicate the necessary quality information internally and externally to achieve the entity's objectives. Based on the results of our audit work, DEA headquarters officials said that they had taken initial steps and have additional plans to update DEA's written guidance. For example, officials told us that they plan to convene a working group to help address this issue after the eradication season in 2018. This working group will, according to officials, elicit input from DEA headquarters, regional contractors and DCE/SP coordinators in the field, and participating agencies. However, DEA headquarters officials could not provide us with any details or documentation of their initial steps and additional plans to address this issue. Clarifying the guidance and communicating it to participating agencies and DEA field officials—for example, by sharing the updated guidance with them, discussing reporting practices during the national strategic meeting, or including the guidance in DEA information systems—would help ensure the consistent application of the guidance and, as a result, improve the reliability of the information DEA collects. The improved information could help DEA assess program performance and manage the program more effectively.

DEA Has Not Clearly Documented All of Its Program Goals, and Does Not Have Measures to Assess Performance

Although DEA collects and uses information on DCE/SP activities to help manage the program, it has not clearly documented all of its program goals and has not developed performance measures to assess whether the agency is making progress towards achieving its goals. We did not find explicitly-labeled program goals in the DCE/SP Handbook, DEA budget justification documents, or DEA's webpage, which we reviewed. However, we found the following four statements, which appeared to reflect program goals:

1. halt the spread of marijuana cultivation in the United States;
2. eradicate marijuana that is illegally cultivated by a person or drug trafficking organization;
3. disrupt and dismantle drug trafficking organizations and deprive these organizations of significant revenue streams; and
4. deter the illegal cultivation of marijuana through arrest, prosecution, incarceration of cultivators and seizure of drug-derived assets, and by making cultivation untenable due to increased law enforcement activities.
DEA headquarters officials confirmed to us that the statements above reflected the goals of the program. However, they also described the following additional goals that are not explicitly defined in agency or program documentation:

• maximize the number of law enforcement agencies that participate in DCE/SP;
• improve safety during operations through increased access to training and eradication schools; and
• share information on illegal marijuana cultivation among law enforcement agencies.

Headquarters officials explained that because they are still relatively new to the program—having arrived in 2016—they had not yet documented these goals. Officials said they plan to document the program goals in the future, but did not provide specific time frames for doing so. Standards for Internal Control in the Federal Government state that management should define objectives clearly to enable the identification of risks and to define risk tolerances. Moreover, objectives are to be specific and measurable so that they can be understood at all levels of the entity and so that performance towards achieving those objectives can be assessed.

Further, DEA has not developed performance measures with baselines, measurable targets, and linkage to program goals—several important attributes we have previously identified that performance measures should include if they are to be effective in monitoring progress and determining how well programs are achieving their goals. Baselines enable decision makers to assess the program's performance over time. Identifying and reporting deviations from the baseline as a program proceeds provides valuable oversight by identifying areas of program risk and their causes to decision makers. Measurable targets help decision makers assess whether program goals were achieved. Lastly, linkages between an organization's goals and performance measures create a line of sight so that everyone understands how program activities contribute to the organization's goals.

DEA headquarters officials agreed that developing baselines to monitor trends in program performance over time would be useful for program management. However, officials said that setting measurable targets would be challenging because of factors outside of DEA's control that may affect eradication efforts, including extreme weather events and changes in illegal marijuana cultivation and trafficking trends. Nonetheless, DEA currently has performance measures with measurable targets for some of its drug enforcement-related programs and activities. For example, DEA has a performance measure with a measurable target for its agency-wide objective related to dismantling drug trafficking organizations—maximizing the monetary value of currency, property, and drugs seized. This performance measure reflects the outcomes of multiple activities across DEA, including DCE/SP. Further, while we agree that developing drug enforcement-related performance measures with measurable targets may be difficult, targets can help DEA evaluate past performance and make informed decisions about future operations, including allocating resources and developing strategies to maintain or improve performance. GPRAMA directs agencies to develop and document goals, as well as performance measures to assess progress towards their goals.
While those requirements are applicable to the department or agency level (e.g., DOJ), we have previously reported that they can serve as leading practices at other organizational levels, including the program, project, or activity level. Agencies can use performance measurement to make various types of management decisions to improve programs and results, such as developing strategies and allocating resources, including identifying problems and taking corrective action when appropriate. Clearly documenting all program goals and developing performance measures with baselines, measurable targets, and linkage to program goals could provide DEA with the information it needs to assess progress and make informed decisions about current and future operations.

Conclusions

Despite states' legalization of marijuana for medical or recreational purposes, illegal marijuana cultivation continues to occur. As the nation's primary federal law enforcement agency for investigating and enforcing potential violations of controlled substance laws and regulations, DEA aims to halt the spread of illegal domestic marijuana cultivation. To accomplish this goal, DEA has provided financial assistance through DCE/SP to support participating state and local law enforcement agencies' efforts to curb illegal domestic marijuana cultivation for almost four decades. These participating agencies have collectively eradicated several million illegal domestic marijuana plants annually in recent years. Nonetheless, DEA management can take further actions to improve its oversight of various aspects of the program. Specifically, by developing and implementing a plan with specific actions and time frames to ensure that DEA field staff are consistently implementing the agency's requirements for collecting information on program expenditures, DEA will be better positioned to ensure that program funds are being expended appropriately. Additionally, by clarifying its guidance on the eradication and suppression activities participating agencies are required to report—and communicating the guidance to participating agencies and relevant DEA officials—DEA will have more reliable information to assess program performance and manage the program effectively. Finally, by clearly documenting program goals for DCE/SP and developing related performance measures with baselines, measurable targets, and linkage to those goals, DEA will be better able to assess the program's performance over time and, if necessary, redirect resources to effective eradication and suppression efforts. Moving in this direction could help program investments achieve even greater results.

Recommendations for Executive Action

We are making the following four recommendations to DEA:

The DEA Administrator should develop and implement a plan with specific actions and time frames to ensure that regional contractors are implementing DEA's requirement for collecting documentation supporting participating agencies' DCE/SP program expenditures in the intended manner. (Recommendation 1)

The DEA Administrator should clarify DCE/SP guidance on the eradication and suppression activities that participating agencies are required to report, and communicate it to participating agencies and DEA officials responsible for implementing DCE/SP. (Recommendation 2)

The DEA Administrator should clearly document all DCE/SP program goals. (Recommendation 3)

The DEA Administrator should develop DCE/SP performance measures with baselines, targets, and linkage to program goals. (Recommendation 4)
(Recommendation 4) Agency Comments and Our Evaluation We provided a draft of this report to DOJ, including DEA, and USDA for review and comment. In its comments, reproduced in appendix II, DEA concurred with our recommendations and described planned actions to address them. DEA also provided technical comments, which we incorporated as appropriate. USDA told us that they had no comments on the draft report. In response to our first recommendation that DEA develop and implement a plan with specific actions and time frames to ensure that regional contractors are implementing DEA's requirement for collecting documentation supporting participating agencies' DCE/SP program expenditures in the intended manner, DEA concurred and stated that it will take measures to ensure that contract personnel are documenting and reporting expenditures in accordance with policy. Furthermore, DEA reported plans to update its DCE/SP Handbook by the end of the second quarter of fiscal year 2019 to provide uniform policy guidance on this matter. These actions, if implemented as described, should address the intent of our recommendation. DEA also concurred with our second recommendation that DEA clarify DCE/SP guidance on the eradication and suppression activities that participating agencies are required to report, and communicate it to participating agencies and DEA officials responsible for implementing DCE/SP. In its response, DEA reported plans to update the DCE/SP Handbook by the end of the second quarter of fiscal year 2019 so that the handbook clearly articulates the requirements and methods for reporting eradication and suppression data. Furthermore, DEA reported plans to conduct site visits and conference calls in the third and fourth quarters of fiscal year 2019 to communicate the requirements. These actions, if implemented as described, should address the intent of our recommendation. DEA concurred with our third recommendation that DEA clearly document all DCE/SP program goals. In its response, DEA reported plans to amend and document program goals for fiscal year 2019 and ensure that they are explicitly included in the DCE/SP Handbook and budget submissions. These actions, if implemented as described, should address the intent of our recommendation. DEA concurred with our fourth recommendation that DEA develop DCE/SP performance measures with baselines, targets, and linkage to program goals. In its response, DEA stated that it had identified performance measures for DCE/SP and convened an ongoing working group of subject matter experts to select a subset of these performance measures in order to better inform DCE/SP processes and management decision-making. These actions, if implemented as described, should address the intent of our recommendation. We are sending copies of this report to the appropriate congressional committees, the Attorney General, the DEA Administrator and the Secretary of Agriculture, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8777 or goodwing@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. 
Appendix I: Domestic Cannabis Eradication/Suppression Program Funds Obligated to and Expended by Participating Agencies, 2015 through Fiscal Year 2018 From 2015 through fiscal year 2018, the Drug Enforcement Administration (DEA) obligated about $56 million through its Domestic Cannabis Eradication/Suppression Program (DCE/SP) to state and local law enforcement agencies (participating agencies) in 43 states and the U.S. Virgin Islands to support their marijuana eradication and suppression activities. See table 2. In the table below, we also provide the status of marijuana legalization under state or territorial law, as of July 2018. Specifically, these categories include: recreational and medical legalization (R&M); medical legalization only (M); cannabidiol product access laws only (CBD); and no legalization (No). Appendix II: Comments from the Department of Justice Appendix III: GAO Contact and Staff Acknowledgments In addition to the contact named above, Brett Fallavollita (Assistant Director), David Bieler (Analyst-in-Charge), Matthew T. Lowney, Billy Commons, Pamela Davidson, Steve Gaty, Eric Hauswirth, Benjamin Licht, Kimberly McGatlin, and Adam Vogt made key contributions to this report.
Why GAO Did This Study Marijuana is generally illegal under federal law. Nonetheless, an increasing number of states have legalized medical or recreational marijuana under state law. However, even in these states, some marijuana-related activity may still be illegal under state law. Since 1981, DEA's DCE/SP has provided financial support to participating state and local agencies for their efforts to eradicate illegal marijuana. GAO was asked to review DEA's DCE/SP. This report examines (1) DCE/SP funding and expenditures in recent years, (2) how DEA ensures that participating agencies expend funds in accordance with program requirements, and (3) how DEA uses performance assessment to help manage DCE/SP. GAO analyzed DCE/SP guidance, and expenditure and performance information from 2015 through fiscal year 2017, and evaluated DEA's oversight and performance management efforts against internal control standards. GAO also interviewed officials from DEA, the U.S. Forest Service, and participating agencies in six states, which GAO selected to include varying levels of DCE/SP funding and numbers of marijuana grow sites eradicated in recent years. What GAO Found The Drug Enforcement Administration (DEA) obligated over $17 million annually on average from 2015 through 2018 to its Domestic Cannabis Eradication/Suppression Program (DCE/SP)—which supports participating state and local law enforcement agencies' efforts to eradicate illegal marijuana. DEA obligated funds to participating agencies in states with and without marijuana legalization laws. Participating agencies expended the majority of funds on aviation support and overtime (see fig. below). Officials told GAO they expended funds to help eradicate marijuana that was not in compliance with state and local marijuana laws. For example, officials in California—a state with medical and recreational marijuana legalization laws—said that all of their eradication occurs on public lands such as national forests, or private land that had been trespassed upon. In total, agencies have eradicated several million plants annually in recent years.
Figure: Participating Agencies' Top Domestic Cannabis Eradication/Suppression Program (DCE/SP) Expenditures in Recent Years.
DEA oversees participating agencies' compliance with program expenditure requirements in various ways, but does not consistently collect supporting documentation for expenditure reports. DEA field officials collect varying levels of documentation, and headquarters officials were not aware of these varying practices. DEA officials said they are now working to address this issue, but they have not developed a plan with specific actions and time frames for completion. By developing and implementing such a plan, DEA could have greater assurance that funds are being expended appropriately. DEA collects information on program activities to help manage DCE/SP, such as the number of plants eradicated. However, participating agencies GAO spoke with have practices for reporting some program activities that differ from DEA's guidance due to varying interpretations of the guidance. As a result, this information is neither fully accurate nor reliable for assessing program performance. Also, DEA has not clearly documented all of its program goals or developed performance measures to assess progress toward those goals.
Improving the reliability of the information it collects, clearly documenting all program goals, and developing performance measures could provide DEA with the information it needs to manage the program more effectively. What GAO Recommends GAO is making four recommendations, including that DEA develop a plan to ensure the collection of consistent documentation of expenditures, clarify its guidance for reporting program activities, document all of its program goals, and develop performance measures. DEA concurred with the recommendations.
Experts and Stakeholders Have Proposed Restructuring EOIR's Immigration Court System As we reported in June 2017, some immigration court experts and stakeholders have recommended restructuring EOIR's administrative review and appeals functions within the immigration court system—immigration courts and BIA—and the Office of the Chief Administrative Hearing Officer, to improve the effectiveness and efficiency of the system or, among other things, increase the perceived independence of the system and professionalism and credibility of the workforce. We found that the 10 experts and stakeholders we interviewed generally supported one of the following scenarios for restructuring the immigration court system, all of which would require a statutory change to implement: a court system independent (i.e., outside) of the executive branch to replace EOIR's immigration court system, including both trial and appellate tribunals; a new, independent administrative agency within the executive branch to carry out EOIR's quasi-judicial functions with both trial-level immigration judges and an appellate-level review board; or a hybrid approach, placing trial-level immigration judges in an independent administrative agency within the executive branch, and an appellate-level tribunal outside of the executive branch. Six of the 10 experts and stakeholders we interviewed supported restructuring the immigration court system into a court independent of the executive branch. Two of the experts and stakeholders we contacted supported a new independent administrative agency within the executive branch. One of the experts and stakeholders supported the hybrid scenario, placing trial-level immigration judges in an independent administrative agency within the executive branch, and an appellate-level tribunal outside of the executive branch. As we reported in June 2017, experts and stakeholders offered several reasons for each of the proposed scenarios, such as potentially increasing judicial autonomy over courtrooms and dockets, as well as reasons against restructuring options, such as that restructuring may not resolve existing management challenges. These reasons for and against each of the scenarios are summarized in table 1 and discussed further below. We are not taking a position on any of these restructuring proposals, or on any of the reasons offered for or against them. We present the information we obtained from the experts and stakeholders to inform policymakers about proposals that have been put forth regarding restructuring the immigration court system. We found in our June 2017 report that experts and stakeholders we interviewed cited several reasons for the proposed restructuring scenarios, as described in table 1 and below. Independence: Six of the 10 experts and stakeholders we interviewed stated that establishing a court system independent (i.e., outside) of the executive branch could increase the perceived independence of the system. For example, 1 of the 10 experts and stakeholders we interviewed explained that the public's perception of the immigration court system's independence might improve with a restructuring that removes the quasi-judicial functions of the immigration courts and the BIA from DOJ, because DOJ is also responsible for representing the government in appeals to the U.S. Circuit Courts of Appeals by individuals seeking review of final orders of removal.
Another 1 of the 10 experts and stakeholders we interviewed explained that under the existing immigration court system, respondents may perceive, due to the number of immigration judges who are former DHS attorneys and the co-location of some immigration courts with DHS U.S. Immigration and Customs Enforcement's Office of the Principal Legal Advisor offices, that immigration judges and DHS attorneys are working together. Two of the 10 experts and stakeholders we interviewed also proposed that an immigration court system independent of the executive branch would be less susceptible to political pressures within the executive branch. Experts and stakeholders cited similar independence-related reasons for supporting the administrative agency and hybrid scenarios. Judicial autonomy: Four of the 10 experts and stakeholders we interviewed stated that a court system independent of the executive branch might give immigration judges and BIA members more judicial autonomy over their courtrooms and dockets. For example, 1 of the 10 experts and stakeholders we interviewed stated that immigration judges in an independent court system would be able to file complaints against private bar attorneys directly with the state bar authority instead of filing the complaint with DOJ first, as required for immigration judges acting in their official capacity. EOIR officials explained that while immigration judges cannot directly file a complaint with the state bar authority, EOIR's Disciplinary Counsel, which is charged with investigating these complaints, can file a complaint with the state bar on behalf of the immigration judge. Workforce professionalism or credibility: Experts and stakeholders also stated reasons why a court system independent of the executive branch might improve the professionalism or credibility of the immigration court system's workforce. For example, 1 of the 10 experts and stakeholders we interviewed explained that if the judge career path was improved under a restructuring such that immigration judges were able to advance to more prestigious judgeships, this could assist in attracting candidates to the immigration bench. Regarding the hybrid scenario, 1 of the 10 experts and stakeholders we interviewed noted that this proposal may attract a more diverse and balanced pool of candidates for immigration judge positions. Organizational capacity or accountability: Experts and stakeholders who supported a court system independent of the executive branch also cited enhanced organizational capacity or accountability as a reason for adopting this scenario. One of the 10 experts and stakeholders we interviewed explained that this type of restructuring may allow the immigration court system to improve its organizational capacity by changing the way it staffs its managerial and supervisory positions. For example, this individual explained that instead of placing immigration judges in managerial positions, EOIR could, as an independent court system, more easily attract and fill those positions with individuals who have experience in court management and public administration. Similarly, this same individual also noted that if the restructured immigration court system was placed within the purview of the Administrative Office of the U.S.
Courts, which provides a wide range of support services to the federal judiciary (including administrative, technological and legal services), it could use its expertise in court management to assist with managing the system. In terms of enhancing organizational accountability, 1 of the 10 experts and stakeholders we interviewed explained that an independent court system could also increase the transparency of the performance evaluation system for immigration judges by incorporating feedback from court stakeholders, such as DHS and private bar attorneys, on the judges’ performance as well as increasing the transparency of the process for making complaints against immigration judges. According to this individual, the complaint process for other federal judges is more transparent and the judges are given an opportunity to address the complaint and appeal any decisions that resulted from the complaint. We also found in our June 2017 report that the experts and stakeholders we interviewed cited several reasons against the proposed restructuring scenarios, as described in table 1 and below. Appointment of immigration judges: Two of the 10 experts and stakeholders we interviewed noted that requiring the presidential nomination and Senate confirmation of immigration judges under an independent court system could further complicate and delay the hiring of new judges by making the appointment of additional judges more dependent on external parties. Administrative challenges: Two of the 10 experts and stakeholders we interviewed stated that it may be difficult to establish and administer a court system independent of the executive branch. Specifically, these experts and stakeholders expressed concern that the Administrative Office of the U.S. Courts may be reluctant to assume the vast responsibility of administering a newly created court system. Regarding administrative challenges associated with the establishment of an independent administrative agency, 1 of the 10 experts and stakeholders we interviewed explained that this scenario might be overly complicated to implement since EOIR would need to develop its own administrative functions outside of DOJ. According to another 1 of the 10 experts and stakeholders we interviewed, creating a hybrid court system may further complicate the administration of the immigration court system and potentially result in difficulties for respondents. Procurement of resources: Five of the 10 experts and stakeholders we interviewed expressed the concern that a restructured immigration court system, regardless of the scenario, would not be able to procure sufficient resources outside of DOJ. For example, 1 of the 10 experts and stakeholders noted that a restructured independent court or administrative agency might have less leverage outside of DOJ to compete for resources. Trial level disconnection from the appellate level: One of the 10 experts and stakeholders we interviewed stated that if the hybrid scenario were to be adopted, the trial level may become more disconnected from the appellate level, due to the placement of the immigration courts within the executive branch and the appellate body outside of the executive branch. Resolution of existing management challenges or case backlog: Two of the 10 experts and stakeholders we contacted stated that a court system independent of the executive branch may not address the immigration courts’ management challenges, such as the case backlog. 
For example, 1 of the 10 experts and stakeholders stated that the immigration court system would likely have a large caseload regardless of how it is structured. EOIR Has Initiated Actions to Improve Its Management of the Immigration Courts, but Additional Steps Are Needed to Address Long-Standing Challenges We also reported in June 2017 that EOIR could take several actions to address long-standing management and operational challenges and reduce the case backlog. In particular, we identified challenges related to, and made 11 recommendations to improve, EOIR's workforce planning, hiring, performance assessment, and technology utilization. EOIR generally concurred with our recommendations and has initiated actions to address them. Overall, EOIR has fully implemented 1 recommendation but needs to take additional steps to fully implement the remaining 10 recommendations to help strengthen the agency's management and help reduce the case backlog. Workforce Planning. In June 2017, we reported that EOIR could help address its case backlog and staffing challenges, such as by hiring more immigration judges to meet its authorized number of judges and by improving its workforce planning and hiring practices. During the course of our review, we found that EOIR estimated staffing needs using an informal approach that did not account for long-term staffing needs, reflect EOIR's performance goals, or account for differences in the complexity of court cases. For example, in developing its staffing estimate, EOIR did not calculate staffing needs beyond the next fiscal year or take into account resources needed to achieve the agency's case completion goals. Furthermore, we found that, according to EOIR data, approximately 39 percent of all immigration judges were eligible to retire as of June 2017, but EOIR had not systematically accounted for these impending retirements in its staffing estimate. At the time of our review, EOIR had begun to take steps to account for long-term staffing needs, such as by initiating a workforce planning report and a study on the time it takes court staff to complete key activities. However, we found that these efforts did not align with key principles of strategic workforce planning that would help EOIR better address current and future staffing needs. EOIR officials also stated that the agency had begun to develop a strategic plan for fiscal years 2018 through 2023 that could address its human capital needs. We recommended that EOIR develop and implement a strategic workforce plan that addresses key principles of strategic workforce planning. EOIR agreed with our recommendation. In February 2018, EOIR officials told us that they had established a committee and working group to examine the agency's workforce needs and would include workforce planning as a key component in EOIR's forthcoming strategic plan. Specifically, EOIR officials stated that the agency had established the Immigration Court Staffing Committee in April 2017 to examine how to best leverage its existing judicial and court staff workload model to address its short- and long-term staffing needs, assess the critical skills and competencies needed to achieve future programmatic results, and develop strategies to address human capital gaps, among other things. In February 2018, EOIR officials stated that the agency replaced this committee, which had completed its work, with a smaller working group of human resource employees charged with addressing the agency's strategic workforce planning.
Additionally, EOIR officials stated that the agency was developing a strategic plan that includes human capital planning as a critical component, which will be used to guide workforce planning for the agency. These are positive steps, but to fully address our recommendation, EOIR needs to continue to develop, and then implement, a strategic workforce plan that: (1) addresses the agency's short- and long-term staffing needs; (2) identifies the critical skills and competencies needed to achieve future programmatic results; and (3) includes strategies to address human capital gaps. Once this strategic workforce plan is completed, EOIR needs to monitor and evaluate the agency's progress toward its human capital goals. Hiring. Additionally, in our June 2017 report, we found that EOIR did not have efficient practices for hiring new immigration judges, which has contributed to immigration judges being staffed below authorized levels and to staffing shortfalls. For example, in fiscal year 2016, EOIR was allocated 374 immigration judge positions and had 289 judges on board at the end of the fiscal year. EOIR officials attributed these gaps to delays in the hiring process. Our analysis of EOIR hiring data supported their conclusion. Specifically, we found that from February 2014 through August 2016, EOIR took an average of 647 days to hire an immigration judge—more than 21 months. As a result, we recommended that EOIR (1) assess the immigration judge hiring process to identify opportunities for efficiency; (2) use the assessment results to develop a hiring strategy that targets short- and long-term human capital needs; and (3) implement any corrective actions related to the hiring process resulting from this assessment. In response to our report, EOIR stated that it concurred with our recommendation and was implementing a new hiring plan, announced by the Attorney General in April 2017, that was intended to streamline hiring. Among other things, EOIR stated that the new hiring plan sets clear deadlines for assessing applicants moving through different stages of the process and for making decisions on advancing applicants to the next stage, and allows for temporary appointments for selected judges pending full background investigations. In February 2018, EOIR indicated to us that it had begun to use the process outlined in its hiring plan to fill judge vacancies. The Attorney General also announced in April 2017 that the agency would commit to hire an additional 50 judges in 2018 and 75 additional judges in 2019. In January 2018, EOIR officials told us that the agency had a total of 330 immigration judges, an increase of 41 judges since September 2016. Hiring these additional judges is a positive step; however, EOIR remains below its fiscal year 2017 authorized level of 384 immigration judges based on funding provided in fiscal years 2016 and 2017. Additionally, the Consolidated Appropriations Act, 2018, provided funding for EOIR to hire at least 100 additional immigration judge teams, including judges and supporting staff, with a goal of fielding 484 immigration judge teams nationwide by 2019. To fully address our recommendation, EOIR will need to continue to improve its hiring process by (1) assessing the prior hiring process to identify opportunities for efficiency; (2) developing a hiring strategy targeting short- and long-term human capital needs; and (3) implementing corrective actions in response to the results of its assessment of the hiring process.
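For context, the average hiring time cited above is the kind of figure that can be computed directly from hiring milestone dates in personnel records. The following minimal Python sketch, using invented dates rather than actual EOIR data, shows one way such an average could be derived (dividing 647 days by an average month length of roughly 30.4 days is what yields the "more than 21 months" figure):

    from datetime import date

    # Hypothetical (announcement closed, judge entered on duty) date pairs;
    # these are invented for illustration and are not actual EOIR records.
    hires = [
        (date(2014, 2, 10), date(2015, 11, 30)),
        (date(2015, 6, 1), date(2017, 3, 15)),
        (date(2014, 9, 20), date(2016, 5, 2)),
    ]

    # Elapsed days for each hire, then the average across all hires.
    days_to_hire = [(on_duty - announced).days for announced, on_duty in hires]
    average_days = sum(days_to_hire) / len(days_to_hire)

    print(f"Average time to hire: {average_days:.0f} days "
          f"(about {average_days / 30.4:.1f} months)")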
Performance Assessment. Regarding EOIR's performance assessment, we reported in June 2017 that EOIR had previously established performance monitoring activities and measures to assess aspects of the immigration courts, but it had eliminated several of these performance assessment mechanisms. EOIR also had goals for some cases it adjudicated, such as those involving detained respondents, but no longer had goals for most cases, including some cases it had prioritized for adjudication. For example, we found that EOIR did not have performance measures or goals for completing cases in which the respondent is not detained (non-detained cases), which comprised 83 percent of immigration courts' total caseload from fiscal year 2010 through fiscal year 2015. To help EOIR more effectively monitor its performance and fully evaluate whether the immigration courts are achieving EOIR's mission, we recommended that EOIR establish and monitor comprehensive case completion goals, including a goal for completing non-detained cases not captured by performance measures, and goals for cases it considers a priority. EOIR agreed with this recommendation and has taken steps to address it. For example, EOIR issued guidance in January 2018 to all immigration court staff that established the agency's goals for each immigration court in adjudicating cases. In particular, EOIR identified in this guidance a case completion goal for non-detained cases: courts must complete 85 percent of all non-detained removal cases that do not qualify as a "status case" within 1 year of filing of the Notice to Appear (NTA) in court, reopening or recalendaring of the case, remand from the Board of Immigration Appeals, or notification of release from custody. According to this guidance, EOIR has also retained case completion goals for other categories it considers a priority, such as cases in which the respondent is detained and credible fear reviews. In its January 2018 guidance, EOIR stated that it will track these measures and the courts' performance in meeting them, as well as regularly audit these measures. To fully address this recommendation, EOIR needs to monitor courts' performance in meeting these goals. In June 2017, we also reported that EOIR collected information on the extent and reasons why immigration judges issue continuances—temporary adjournments of case proceedings until a different day or time—but did not systematically assess these data to identify and address potential operational challenges affecting the immigration courts or areas where immigration judges could benefit from additional guidance or training. An immigration judge may continue a case for good cause shown, such as to allow respondents to obtain legal representation or DHS to complete required background investigations and security checks. Our analysis of continuance records from fiscal year 2006 through fiscal year 2015 showed that the use of continuances had grown over time. Specifically, all types of continuances increased by 23 percent from fiscal year 2006 through fiscal year 2015, and operational continuances, such as those caused by a lack of foreign language interpretation or a video-teleconference (VTC) malfunction, increased by 33 percent over this same time period. We recommended that EOIR systematically analyze immigration court continuance data to identify and address any operational challenges faced by courts or areas for additional guidance or training.
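The systematic analysis we recommended does not require elaborate tooling. As a minimal illustration, the following Python sketch, using invented continuance records and hypothetical reason codes (EOIR's actual code set is not reproduced here), tallies continuances by fiscal year and computes growth overall and for the operational subset, the same kind of comparison underlying the 23 percent and 33 percent figures above:

    from collections import Counter

    # Hypothetical (fiscal_year, reason_code) continuance records; codes
    # beginning with "OP-" stand in for operational reasons such as
    # interpreter unavailability or a VTC malfunction.
    records = [
        (2006, "OP-INTERP"), (2006, "RESP-ATTY"), (2006, "DHS-CHECK"),
        (2015, "OP-INTERP"), (2015, "OP-VTC"), (2015, "RESP-ATTY"),
        (2015, "DHS-CHECK"),
    ]

    all_by_year = Counter(year for year, _ in records)
    operational_by_year = Counter(
        year for year, code in records if code.startswith("OP-"))

    def percent_change(start, end):
        """Percent change from a baseline count to a later count."""
        return 100 * (end - start) / start

    print(f"All continuances, FY2006 to FY2015: "
          f"{percent_change(all_by_year[2006], all_by_year[2015]):+.0f}%")
    print(f"Operational continuances, FY2006 to FY2015: "
          f"{percent_change(operational_by_year[2006], operational_by_year[2015]):+.0f}%")

The same tally, broken out by individual reason code or by court, would point to the specific operational challenges, such as interpreter shortages or VTC malfunctions, that the recommendation is intended to surface.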
EOIR agreed with this recommendation and, in July 2017, issued updated guidance for immigration judges on fair and efficient docket management relating to the use of continuances. For instance, according to this guidance, judges must annotate the case worksheet on disposition of the case with a continuance code describing the reason for the continuance, and court staff must ensure that each continuance code is accurately entered into the agency's case management system for all cases. EOIR also issued guidance in October 2017 updating case continuance codes and their definitions to assist immigration judges in recording this information on the case worksheet. These are positive steps, and analyzing the use of continuances on a systematic basis would give EOIR greater insight into more widespread operational issues that the courts may be facing. To fully address our recommendation, EOIR will need to systematically analyze immigration court continuance data to identify and address any operational challenges faced by courts or areas for additional guidance or training. We also reported in June 2017 that EOIR could improve the reliability of its case management data and reports on case completion times by ensuring that court staff accurately record NTAs in a timely manner. We found that EOIR did not have guidance or data integrity efforts to ensure the timely and accurate recording of NTAs in its case management system, and that at least 16 percent of NTA dates were unreliable. EOIR uses NTA dates to calculate case completion times, which are used to assess court performance. The agency reports this information publicly in DOJ's Annual Performance Report. We concluded that improving the reliability of NTA data would allow EOIR to provide more accurate information on case completion times to Congress and the public. We recommended that EOIR update its policies and procedures to promote the timely and accurate recording of NTAs. In response to our report, EOIR partially concurred with our recommendation and stated that it would continue to monitor the timeliness and accuracy of NTA recording and implement corrective actions as needed. In January 2018, as part of its policy on case completion goals, EOIR also created a goal that 100 percent of all electronic and paper records be accurate and complete. This goal is a positive step, and updating policies and procedures to remind staff about the importance of timely and accurate recording of all NTAs would provide EOIR greater assurance that this goal could be consistently met. To fully address our recommendation, EOIR will need to update its policies and procedures to ensure the timely and accurate recording of NTAs. Technology Utilization. We also made several recommendations to EOIR in our June 2017 report to improve its technology utilization, including the agency's oversight of the ongoing development of a comprehensive electronic-filing (e-filing) capability—a means of transmitting documents and other information to immigration courts through an electronic medium, rather than on paper. EOIR identified the implementation of an e-filing system as a goal in 2001 but has not, as of April 2018, fully implemented this system. In 2001, EOIR issued an executive staff briefing for an e-filing system that stated that only through a fully electronic case management and filing system would the agency be able to accomplish its goals.
This briefing also cited several benefits of an e-filing system, including, among other things, reducing the data entry, filing, and other administrative tasks associated with processing paper case files, and improving communication with external court stakeholders, such as respondents and attorneys, by providing the ability to file court documents from private home and office computers. As we reported in June 2017, EOIR initiated a comprehensive e-filing effort in 2016—the EOIR Court and Appeals System (ECAS)—for which EOIR had documented policies and procedures governing how its primary ECAS oversight body—the ECAS Executive Committee—would oversee ECAS through the development of a proposed ECAS solution. However, we found that EOIR had not yet designated an entity to oversee ECAS after selection of a proposed solution during critical stages of its development and implementation. In our June 2017 report, we recommended that, in order to help ensure EOIR meets its cost and schedule expectations for ECAS, the agency identify and establish the appropriate entity to oversee ECAS through full implementation. EOIR concurred and stated that it had selected and convened the EOIR Investment Review Board to serve as the ECAS oversight body, with the Office of Information Technology directly responsible for the management of the ECAS program. EOIR officials told us in February 2018 that the board convened in October 2017 and January 2018 to discuss, among other things, the ECAS program. However, as we reported in June 2017, EOIR officials previously told us that the EOIR Investment Review Board was never intended to oversee ECAS implementation due to the detailed nature of this system's implementation. EOIR has recently provided us with documentation related to its oversight of ECAS, which we are reviewing to help determine the extent to which EOIR has met the intent of our recommendation. Additionally, we recommended in June 2017 that EOIR develop and implement a plan that is consistent with best practices for overseeing ECAS to better position the agency to identify and address any risks and implement ECAS in accordance with its cost, schedule, and operational expectations. As of April 2018, EOIR has not indicated that it has developed such a plan. In June 2017 we also reported on ways EOIR could enhance its VTC program. EOIR is authorized by statute to hold immigration removal proceedings through VTC. According to EOIR officials, EOIR largely uses VTC for hearings for detained individuals, including both master calendar and individual merits hearings. We reported in June 2017 that officials from all six of the immigration courts we visited identified challenges related to VTC hearings, including difficulties maintaining connectivity, hearing respondents, exchanging paper documents, conducting accurate foreign language interpretation, and assessing the demeanor and credibility of respondents and witnesses. We further found that EOIR had not, in accordance with best practices, (1) evaluated its VTC program to ensure that it is outcome-neutral, or (2) established a mechanism to solicit feedback and comments about VTC from those who use it regularly to assess whether it meets user needs.
Therefore, we recommended that EOIR take three actions to provide further assurances that its use of VTC in immigration hearings is outcome-neutral, including that it collect more complete and reliable data related to its VTC use (e.g., the number of hearings it conducts by VTC) and use the data to assess any effects of VTC on immigration hearings. EOIR partially concurred with these actions and has since taken some steps to implement these recommendations, such as piloting a project to collect data on respondent appeals related to the use of VTC in their cases. Additionally, EOIR officials told us in August 2017 that the agency is studying how to collect more complete and reliable data on the number and type of hearings it conducts through VTC and use these and other data to assess any effects of VTC on immigration hearings. We also recommended that EOIR develop and implement a mechanism to solicit and monitor feedback from respondents regarding their satisfaction and experiences with VTC hearings. EOIR concurred and implemented this recommendation in December 2017 by establishing a mechanism on its public website to solicit open-ended feedback from respondents regarding their satisfaction with VTC hearings, including the audio and visual quality of the hearing. According to EOIR officials, a group of individuals within EOIR’s Office of the Chief Immigration Judge is responsible for monitoring and addressing feedback received through this portal. These efforts should help EOIR ensure VTC hearings it conducts meet all user needs and identify and address technical issues with VTC hearings. Chairman Cornyn and Ranking Member Durbin, this completes my prepared statement. I would be happy to respond to any questions you or the members of the committee may have. GAO Contact and Staff Acknowledgments If you or your staff have any questions about this testimony, please contact Rebecca Gambler at (202) 512-8777 or gamblerr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Taylor Matheson (Assistant Director), Kathleen Donovan, Sasan J. “Jon” Najmi, Robin Nye, and Erin O’Brien. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study DOJ's EOIR is responsible for conducting immigration court proceedings, appellate reviews, and administrative hearings to fairly, expeditiously, and uniformly administer and interpret U.S. immigration laws and regulations. This statement addresses (1) scenarios that experts and stakeholders have proposed for restructuring EOIR's immigration court system and the reasons they offered for or against these proposals; and (2) how EOIR manages and oversees the immigration courts, including hiring and performance assessment, among other things. This statement is based on a report GAO issued in June 2017, with selected updates conducted through April 2018 to obtain information from EOIR on actions it has taken to address the report's recommendations. GAO's report incorporated information obtained by reviewing EOIR documentation, analyzing EOIR data, and interviewing agency officials and immigration court experts and stakeholders. For the selected updates, GAO reviewed EOIR documentation. What GAO Found In June 2017, GAO reported that some immigration court experts and stakeholders have recommended restructuring the Executive Office for Immigration Review's (EOIR) administrative review and appeals functions within the immigration court system—immigration courts and Board of Immigration Appeals—to improve its effectiveness and efficiency. The 10 experts and stakeholders GAO interviewed stated that they generally supported one of the following scenarios for restructuring the immigration court system, all of which would require a statutory change to implement: a court system outside of the executive branch to replace EOIR's immigration court system, including both trial and appellate tribunals; a new, independent administrative agency within the executive branch to carry out EOIR's quasi-judicial functions with both trial-level immigration judges and an appellate level review board; or a hybrid approach, placing trial-level immigration judges in an independent administrative agency within the executive branch, and an appellate-level tribunal outside of the executive branch. Six of the 10 experts and stakeholders GAO interviewed supported restructuring the immigration court system into a court independent of the executive branch. Experts and stakeholders offered several reasons for each of the proposed scenarios, such as potentially improving workforce professionalism and credibility. They also provided reasons against restructuring options, including that restructuring may not resolve existing management challenges, such as difficulties related to hiring immigration judges. GAO also reported in June 2017 that EOIR could take several actions to address management challenges. EOIR has since taken some steps to address these challenges, but additional actions are needed. For example, GAO found that EOIR did not have efficient practices for hiring immigration judges, which contributed to judges being staffed below authorized levels. EOIR hiring data showed that on average from February 2014 through August 2016, EOIR took more than 21 months to hire an immigration judge. GAO recommended that EOIR assess the immigration judge hiring process to identify opportunities for efficiency. As of January 2018, EOIR had increased the number of its judges but remained below its authorized level for fiscal year 2017. Hiring additional judges is a positive step; however, to fully address GAO's recommendation, EOIR needs to assess its hiring process to identify opportunities for efficiency. 
In June 2017, GAO also reported on ways EOIR could enhance its video teleconferencing (VTC) program, through which judges conduct hearings by VTC. GAO found that EOIR had not, in accordance with best practices, established a mechanism to solicit feedback and comments about VTC from those who use it regularly to assess whether it meets user needs. GAO recommended EOIR develop and implement such a mechanism. EOIR concurred and implemented this recommendation in December 2017 by establishing a mechanism on its public website to solicit feedback from respondents regarding their satisfaction with VTC hearings. This effort should help EOIR ensure VTC hearings it conducts meet all user needs. What GAO Recommends In its June 2017 report GAO made 11 recommendations to improve EOIR's hiring process and performance assessment, among other things. EOIR generally concurred with the recommendations, has implemented 1, and reported actions planned or underway to address the remaining 10.
Background Under SBA's 7(a) loan program, SBA guarantees loans made by commercial lenders to small businesses for working capital and other general business purposes. These lenders are mostly banks, but some are non-bank lenders, including small business lending companies—lenders whose lending activities are not subject to regulation by any federal or state regulatory agency, but were previously licensed by SBA and authorized to provide 7(a) loans to qualified small businesses. The guarantee assures the lender that if a borrower defaults on a loan, the lender will receive an agreed-upon portion (generally between 50 percent and 85 percent) of the outstanding balance. For a majority of 7(a) loans, SBA relies on lenders with delegated authority to approve and service 7(a) loans and to ensure that borrowers meet the program's eligibility requirements. To be eligible for the 7(a) program, a business must be an operating for-profit small firm (according to SBA's size standards) located in the United States and must meet the credit elsewhere requirement. Because the 7(a) program is required to serve borrowers who cannot obtain conventional credit at reasonable terms, lenders making 7(a) loans must take steps to ensure that borrowers meet the program's credit elsewhere requirement. Because SBA relies on lenders with delegated authority to make these determinations, SBA's oversight of these lenders is particularly important. However, we found in a 2009 report that SBA's lack of guidance to lenders on how to document compliance with the credit elsewhere requirement was impeding the agency's ability to oversee that compliance. To improve SBA's oversight in this area, we recommended in 2009 that SBA issue more detailed guidance to lenders on how to document their compliance with the credit elsewhere requirement. As a result, SBA revised its standard operating procedure to state that each loan file must contain documentation that specifically identifies the factors in the present financing that meet the credit elsewhere test, which we believe met the spirit of our recommendation. SBA's current credit elsewhere criteria for determining 7(a) loan eligibility include the following factors: 1. the business needs a longer maturity than the lender's policy permits; 2. the requested loan exceeds the lender's policy limit regarding the amount that it can lend to one customer; 3. the collateral does not meet the lender's policy requirements; 4. the lender's policy normally does not allow loans to new businesses or businesses in the applicant's industry; or 5. any other factors relating to the credit which, in the lender's opinion, cannot be overcome except for the guarantee. When the 7(a) program was first implemented, borrowers were generally required to show proof of credit denials from banks that documented, among other things, the reasons for not granting the desired credit. Similar requirements remained in effect until 1985, when SBA amended the rule to permit a lender's certification made in its application for an SBA guarantee to be sufficient documentation. This certification requirement remained when the rule was rewritten in 1996. SBA stated that it believed requiring proof of loan denials was demoralizing to small businesses and unenforceable by SBA. SBA and lender roles vary among 7(a) program categories—including regular 7(a), the Preferred Lenders Program, and SBA Express.
Under the regular (nondelegated) 7(a) program, SBA makes the loan approval decision, including the credit determination. Under the Preferred Lenders Program and SBA Express, SBA delegates to the lender the authority to make loan approval decisions, including credit determinations, without prior review by SBA. For each 7(a) program category, lenders are required to ensure that borrowers meet the credit elsewhere requirement for all 7(a) loans. The maximum loan amount under the SBA Express program is $350,000, as opposed to $5 million for other 7(a) loans. The program allows lenders to utilize, to the maximum extent possible, their own credit analyses and loan underwriting procedures. In return for the expanded authority and autonomy provided by the program, SBA Express lenders agree to accept a maximum SBA guarantee of 50 percent. Other 7(a) loans generally have a maximum guarantee of 75 percent or 85 percent, depending on the loan amount. In fiscal year 2016, 1,991 lenders approved 7(a) loans, of which 1,321 approved at least one loan with some form of delegated authority. SBA's Office of Credit Risk Management is responsible for overseeing 7(a) lenders, including those with delegated authority. SBA created this office in fiscal year 1999 to help ensure consistent and appropriate supervision of SBA's lending partners. The office is responsible for managing all activities regarding lender reviews; preparing written reports; evaluating new programs; and recommending changes to existing programs to assess risk potential. Generally, the office oversees SBA lenders to identify unacceptable risk profiles using its risk rating system and to enforce loan program requirements. According to SBA's standard operating procedures, one purpose of the agency's monitoring and oversight activities is to promote responsible lending that supports SBA's mission to increase access to capital for small businesses. In the federal budget, the 7(a) program is generally required to set fees that it charges to lenders and borrowers at a level to cover the estimated cost of the program associated with borrower defaults (in present value terms). To offset some of the costs of the program, such as default costs, SBA assesses lenders two fees on each 7(a) loan. First, depending on the term of the loan, the guarantee fee must be paid by the lender within either 90 days of loan approval or 10 business days of the SBA loan number being assigned. This fee is based on the amount of the loan and the level of the guarantee, and lenders can pass the fee on to the borrower. Second, the servicing fee must be paid annually by the lender and is based on the outstanding balance of the guaranteed portion of the loan. The 7(a) program accounts for a small portion of total small business lending. According to a May 2017 report by the Consumer Financial Protection Bureau, the total debt financing available to small businesses was estimated to be $1.4 trillion. Of that amount, the Consumer Financial Protection Bureau estimated that about 7 percent was SBA loans, including 7(a) loans. SBA and some other researchers have suggested that there may be disparities in credit access among small businesses, based on characteristics of the borrower and firm.
SBA lists as a strategic objective to "ensure inclusive entrepreneurship by expanding access and opportunity to small businesses and entrepreneurs in communities where market gaps remain." In 2007, we reported that some studies had noted disparities among some races and genders in the conventional lending market, but the studies did not offer conclusive evidence on the reasons for those differences. Much of the research we reviewed in 2007 relied on the Board of Governors of the Federal Reserve System's Survey of Small Business Finance, which was last implemented in 2003. Although this survey is no longer available, recently the 12 Federal Reserve Banks conducted the Small Business Credit Survey. In a series of reports based on the more recent survey, researchers found disparities in credit availability based on gender, the age of the firm, and minority status. Businesses That Were New, Women-Owned, or Located in Distressed Areas Received a Majority of 7(a) Loan Dollars over the Past 10 Years From fiscal years 2007 through 2016, a majority of loan dollars guaranteed under the 7(a) program went to small businesses that were new, partially or wholly owned by women, or located in a distressed area. As previously mentioned, recent studies we reviewed by the Federal Reserve Banks and other researchers suggest that certain small business borrowers—including businesses that are new or owned by women—have difficulty obtaining conventional small business loans, which may put them at a disadvantage. As shown in figure 1, almost two-thirds of loan dollars guaranteed under the 7(a) program for this period went to small businesses that were in these two categories or located in a distressed area. The remaining 37 percent of 7(a) loan dollars went to businesses that were established, solely male-owned, and not located in economically distressed areas. See appendixes II and III for additional data on 7(a) loans, such as the total volume, percentage of lending provided by year and by state, and other borrower characteristics, including SBA's loan- and lender-level Small Business Risk Portfolio Solutions score (predictive score) information. In the following figures, we present more detailed data on 7(a) loans to small businesses based on their status as a new business; gender of ownership; location relative to economically distressed areas; and minority ownership for fiscal years 2007 through 2016. New businesses. As shown in figure 2, the percentage of 7(a) loans that went to new businesses decreased from 36 percent in fiscal year 2007 to 23 percent in fiscal year 2011 before increasing to 35 percent by 2016. Gender. From fiscal years 2007 through 2016, the share of the total value of approved 7(a) loans by gender of owner remained fairly consistent (see fig. 3). An average of 70 percent of the total loan value went to male-owned businesses, and the remaining 30 percent went to businesses that were majority (more than 50 percent) or partially (50 percent or less) owned by women. Economically distressed areas. SBA did not provide data on whether 7(a) loans go to businesses located in economically distressed neighborhoods. However, we used data from the American Community Survey for 2011 through 2015, the most recent version available at the time of our analysis, along with zip code information provided by SBA, to determine the average poverty rate by zip code (see fig. 4).
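Because SBA's data do not flag distressed areas directly, the match described above has to be constructed from the two data sources. The following minimal Python sketch, using invented poverty rates, zip codes, and loan amounts (the actual analysis used full American Community Survey tables and SBA's loan records), illustrates the join and the 20 percent poverty threshold, defined in the next paragraph, used to classify an area as economically distressed:

    # Hypothetical average poverty rate by zip code, e.g., derived from
    # American Community Survey household income tables.
    poverty_by_zip = {"12345": 0.24, "67890": 0.08, "55501": 0.31}

    # Hypothetical 7(a) loan records: (borrower zip code, approved amount).
    loans = [
        ("12345", 250_000),
        ("67890", 1_200_000),
        ("55501", 400_000),
    ]

    # Zip codes where at least 20 percent of households fall below the
    # poverty line are treated as economically distressed.
    DISTRESS_THRESHOLD = 0.20

    distressed_dollars = sum(
        amount for zip_code, amount in loans
        if poverty_by_zip.get(zip_code, 0.0) >= DISTRESS_THRESHOLD)
    total_dollars = sum(amount for _, amount in loans)

    print(f"Share of 7(a) dollars to distressed areas: "
          f"{100 * distressed_dollars / total_dollars:.0f}%")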
From fiscal years 2007 through 2016, the proportion of the total value of 7(a) loans approved that went to borrowers in economically distressed areas remained between 23 percent and 26 percent. We defined distressed areas as zip codes where at least 20 percent of the households had incomes below the national poverty line. Minority/Nonminority status of borrower. From fiscal years 2007 through 2016, the proportion of the total value of 7(a) loans approved that went to minority borrowers decreased overall—from 43 percent to 30 percent—with the lowest share at 24 percent in fiscal year 2010 (see fig. 5). The share of approved loan dollars that went to nonminority borrowers varied, increasing to 69 percent in fiscal year 2010 before decreasing to 56 percent in fiscal year 2016. Notably, the share of the total value of loans approved that went to borrowers whose race/ethnicity was categorized as undetermined increased from 5 percent in fiscal year 2007 to 13 percent in fiscal year 2016. This increase does not fully account for the decline in the share going to minority borrowers. However, according to SBA officials, borrowers voluntarily provide self-reported information on race and ethnicity, and therefore the associated trend data should be viewed with caution. SBA Has Processes in Place to Evaluate Lender Compliance, but Its Lender Reviews Do Not Document Reasons for Noncompliance SBA Conducts On-site and Targeted Lender Reviews to Evaluate Lender Compliance with the Credit Elsewhere Documentation Requirement SBA relies on on-site reviews as its primary mechanism for evaluating lenders' compliance with the credit elsewhere requirement. The reviews are performed by third-party contractors with SBA staff participation and additional oversight from SBA. According to SBA's standard operating procedures, these reviews are generally conducted every 12 to 24 months for all 7(a) lenders with outstanding balances on the SBA-guaranteed portions of their loan portfolios of $10 million or more, although SBA may conduct on-site reviews of any SBA lender at any time as it considers necessary. In fiscal year 2016, SBA conducted 40 on-site reviews of 7(a) lenders, representing approximately 35 percent of SBA's total outstanding 7(a) loan portfolio. As part of SBA's on-site reviews, reviewers judgmentally selected a sample of approximately 30 to 40 loan files using a risk-based approach. These loan files accounted for approximately 6 percent to 19 percent of each lender's total gross SBA dollars in fiscal year 2016. For each lender, approximately 70 percent to 90 percent of the loan files in the sample were reviewed to evaluate compliance with the credit elsewhere requirement. According to SBA's contractors, loans that were selected for other reasons, such as issues related to liquidation, were not required to be reviewed for credit elsewhere compliance. SBA requires lenders to provide a narrative to support the credit elsewhere determination in the credit memorandum included in each loan file. SBA's standard operating procedures state that lenders must substantiate that credit is not available elsewhere by (1) discussing the criteria that demonstrate an identifiable weakness in a borrower's credit and (2) including the specific reasons why the borrower does not meet the lender's conventional loan policy requirements.
In keeping with SBA’s documentation requirement, third-party contractors and SBA staff who conduct on-site reviews are supposed to assess whether lenders have adequately documented the credit elsewhere criteria and provided specific reasons supporting the criteria in the credit memorandum. According to SBA’s contractors, adequate documentation of the credit elsewhere determination in the credit memorandum would include not just which of the criteria a borrower met but also a discussion of the basis or justification for the decision. For example, if a lender determined that a borrower needed a longer maturity, the lender should explain in the credit memorandum the reasons why a longer maturity was necessary. SBA’s contractors also told us that they carefully review a lender’s loan policies in preparation for on-site reviews and refer to a lender’s policies throughout the reviews. Reviewers do not attempt to verify the evidence given in support of the credit elsewhere reason beyond the information provided in the credit memorandum. Based on our review of fiscal year 2016 reports, on-site reviews can result in three levels of noncompliance response: Finding: This is the most severe result and is associated with a corrective action for the lender to remedy the issue. Observation: This is a deficiency recorded in the review’s summary but may not warrant a corrective action for the lender. Deficiency Noted: This is the lowest level of response. It is a deficiency noted as part of the review that is not included in the review’s summary and also may not warrant a corrective action. According to SBA officials, SBA’s policy has been that any noncompliance with SBA loan program requirements results in a finding. However, according to SBA officials and our review of the fiscal year 2016 on-site review reports, if a single instance of noncompliance was identified in fiscal year 2016, SBA generally would not issue a finding. Instead, SBA’s contractors said they would attempt to determine whether that instance was an inadvertent error, such as by examining additional loan files. Lenders that are subject to corrective actions are generally required to submit a response to SBA within 30 days to document how they have addressed or plan to address the identified issues. SBA subsequently asks for documentation to show that the lender has remedied the issue, and in some cases will conduct another review that usually includes an assessment of 5 to 10 additional loan files to determine whether the credit elsewhere reason has been adequately documented. According to SBA officials, SBA may also review lenders’ compliance with corrective actions from recent on-site reviews during targeted reviews (discussed below) and delegated authority renewal reviews (for lenders with delegated authority). In addition to on-site reviews, SBA also monitors lenders’ compliance with the credit elsewhere requirement through targeted reviews (performed on- or off-site). Targeted reviews of a specific process or issue may be conducted for a variety of reasons at SBA’s discretion, including assessing a lender’s compliance with the credit elsewhere requirement. In fiscal year 2016, SBA conducted 24 targeted reviews that included an examination of lenders’ compliance with the credit elsewhere documentation requirement. 
For these reviews, SBA examined loan files for 5 judgmentally selected loans that were provided to SBA electronically, as well as copies of the credit elsewhere reasoning (among other underwriting documentation) for 10 additional recently approved loans. SBA also conducts periodic off-site reviews that use loan- and lender-level portfolio metrics to evaluate the risk level of lenders' 7(a) portfolios. According to agency officials, SBA also began using off-site reviews to evaluate lenders' compliance with the credit elsewhere requirement in fiscal year 2016. In that year, SBA conducted off-site reviews of 250 lenders and required these lenders to report the credit elsewhere justification for a sample of 10 loans per lender that were identified by SBA's selection process. Lenders were not required to provide supporting documentation, and SBA did not follow up with lenders or review loan files to ensure the validity of the self-reported reasons. According to SBA, off-site reviews followed the same procedures in fiscal year 2017 as in 2016, and the agency planned to use the same procedures for these reviews in the future. According to the agency, it also routinely evaluates and revises its review processes and procedures. In addition, SBA's Loan Guaranty Processing Center and National Guaranty Purchase Center conduct Improper Payments Elimination and Recovery Act and quality control reviews at the time of loan approval and at the time of guaranty purchase, respectively. These reviews examine the credit elsewhere requirement, among other issues. Lastly, since 2014 SBA's Office of Inspector General has also examined whether high-dollar or early-defaulted 7(a) loans were made in accordance with rules, regulations, policies, and procedures, including the credit elsewhere requirement. SBA's Lender Reviews in 2016 Identified a High Rate of Noncompliance with the Credit Elsewhere Documentation Requirement Our review of the on-site reviews conducted in fiscal year 2016 found that 17 of the 40 reviews—more than 40 percent—identified compliance issues with the credit elsewhere documentation requirement. Of those 17 reviews, 10 reviews resulted in a Finding (all with associated corrective actions), 3 reviews resulted in an Observation (none with associated corrective actions or requirements), and 4 reviews resulted in a Deficiency Noted (one with an associated requirement). For all of the 17 on-site reviews that identified an instance of noncompliance, the issue was related to the lender's documentation of the credit elsewhere criteria or justification. For example, one review found that the lender's "regulatory practices demonstrate material noncompliance with SBA Loan Program requirements regarding documentation of the Credit Elsewhere Test." Another review found that the lender "failed to demonstrate with adequate documentation that credit was not available elsewhere on reasonable terms and conditions." For 2 of the 17 reviews, the issue was partly related to a discrepancy between the credit elsewhere justification used for some of the sample loan files and the lender's own loan policy limits. With regard to SBA's targeted reviews, 7 of 24 reviews (29 percent) conducted in fiscal year 2016 found a compliance issue with the credit elsewhere requirement. Of those 7 reviews, 6 reviews resulted in a Finding (all with associated corrective actions), 1 review resulted in an Observation (without an associated corrective action), and no reviews resulted in a Deficiency Noted.
For all of the 7 targeted reviews that identified a compliance issue, the issue was wholly related to the lender's documentation of the credit elsewhere reason or justification. For example, 4 reviews found that for at least one loan reviewed, "the Lender failed to document justification that credit was unavailable elsewhere." Another review found that "three SBA Express loans and one Small Loan Advantage loan reported 'other factors relating to the credit that in the lender's opinion cannot be overcome except for the guaranty' without specific identification of the factors." Lack of Internal Controls Led to Lender Noncompliance, but Underlying Factors Were Not Documented by SBA's Reviews Based on our review of on-site review reports and an interview with one reviewer, the key factors underlying lenders' high rate of noncompliance with the credit elsewhere documentation requirement were lenders' lack of proper internal controls and procedures and lack of awareness of the credit elsewhere documentation requirement. In fiscal year 2016, SBA's corrective actions related to the credit elsewhere requirement required the lenders to establish or strengthen their policies, procedures, underwriting processes, or internal controls. In addition, contractors conducting the on-site reviews with whom we spoke stated that some lenders appeared to be unfamiliar with SBA's standard operating procedures or were unclear on how to interpret them. For the 11 on-site reviews conducted in 2016 that included corrective actions, SBA generally required lenders to improve controls or procedures. For example, one lender was required to "correct its policy, modify its procedure, and amend its internal controls to ensure that its consideration and documentation of credit unavailable elsewhere identifies the specific fact(s) which are applicable to the specific loan and the determination is rendered and accurate for each individual SBA loan that it originates." Another lender was required to "improve underwriting processes and controls to ensure that the borrower meets the [credit elsewhere] requirement" and to "document the loan file with the reasons for the determination." Similarly, for the six targeted reviews in 2016 that included corrective actions, SBA issued a general requirement for the lender to "identify the causes for the Findings and implement corrective actions." Based on our review of these targeted reviews, lenders generally remedied or intended to remedy the issue by amending their internal controls or procedures. For example, one lender stated that the "Credit Elsewhere test will be incorporated into the Credit Department process." Another lender stated that it would "centralize all SBA underwriting and has developed an SBA addendum that will be utilized for all SBA-guaranteed loans." Although some of SBA's on-site reviews for fiscal year 2016 identified factors leading to noncompliance, they generally did not document reviewers' assessment of lenders' policies and practices for complying with the credit elsewhere documentation requirement. SBA's standard operating procedures state that the on-site reviewers should determine whether or not lenders' policies and practices adhere to the requirement, but they do not require them to document their assessment of these policies and practices. Only 4 of the 40 fiscal year 2016 review reports that we examined included such an assessment.
As a result, although SBA required corrective actions by the lender to address deficiencies, there usually was no record of the underlying factors that resulted in the lender's noncompliance. Federal internal control standards state that management should design control activities to achieve objectives and respond to risks, including appropriate documentation of transactions and internal control. Because SBA does not require reviewers to document their assessment of lenders' policies and practices for complying with the credit elsewhere documentation requirement, the agency lacks quality information that could help explain why so many lenders are not in compliance. This hinders SBA's ability to take informed and effective actions to improve lender compliance with the requirement and ensure that the program is reaching its intended population. SBA Collects Limited Data on Criteria Used for Credit Elsewhere Justifications and Does Not Analyze Patterns in Lender Practices SBA Collects Limited Data on Criteria Used for Credit Elsewhere Justifications SBA does not routinely collect information on the criteria lenders use in their credit elsewhere justifications. As previously discussed, lenders are required to maintain documentation of borrower eligibility (including the credit elsewhere justification) in each loan file for loans approved through lenders' delegated authority. However, SBA cannot readily aggregate information on lenders' credit elsewhere justifications for both delegated and nondelegated loans: For delegated loans, lenders are required to certify the loan's credit elsewhere eligibility on E-Tran, SBA's online portal for origination of delegated and nondelegated loans. However, lenders are only required to check a box to certify that the loan file contains the required credit elsewhere justification and are not required to submit any additional information, including which of the criteria was used to make the determination. According to SBA officials, delegated loans account for loans approved by approximately 70 percent of lenders. For nondelegated loans, lenders are required to submit credit elsewhere documentation to be reviewed by SBA's Loan Guaranty Processing Center. For these loans, which comprise loans approved by the remaining 30 percent of lenders, SBA might maintain paper records on borrowers' eligibility but does not compile such data electronically and thus cannot readily aggregate the data for analysis. Instead, SBA relies on on-site reviews or lender-reported information to review lenders' credit elsewhere justifications and collects limited data from these reviews. For its on-site reviews, SBA does not collect sample data on lenders' use of the credit elsewhere criteria. For its off-site reviews, SBA collected sample data on lenders' use of the credit elsewhere criteria based on 250 such reviews conducted in fiscal year 2016. For these reviews, SBA asked lenders to self-report a short description of the credit elsewhere justifications used for an SBA-selected sample of 10 loans. However, as discussed earlier, SBA did not request or examine loan files as part of these off-site reviews and did not follow up with lenders or review loan files to ensure the validity of the self-reported reasons. One reason why SBA does not routinely collect complete information on lenders' use of the credit elsewhere criteria is that SBA's loan origination system, E-Tran, is not equipped to record or tabulate this information.
In addition, according to an SBA official, on-site reviews do not collect data on the credit elsewhere criteria because the loans reviewed are judgmentally selected and would not accurately represent the larger population. Federal internal control standards state that management should use quality information to achieve the entity's objectives. To do so, management should identify the information needed to achieve the objectives and address the risks, obtain relevant data from reliable internal and external sources in a timely manner, and process the obtained data into quality information. More robust information on lenders' credit elsewhere justifications, including the credit elsewhere criteria, would allow SBA to evaluate patterns in lender practices related to the credit elsewhere requirement and, in turn, help the agency ensure compliance with the requirement. In this context, generalizable data, which can be collected through random sampling, or complete data collected through required reporting for every loan, would allow SBA to better understand patterns in lender practices across the 7(a) program. Further, nongeneralizable data, which are available through SBA's current off- and on-site review processes, would allow SBA to examine specific groups of lenders and could help SBA determine if it is necessary to collect additional data. SBA Has Not Conducted Analysis to Determine If There Are Any Patterns of Noncompliance or Identified Lenders That May Be at Risk SBA does not analyze the limited data it collects to help it monitor lenders' compliance with the credit elsewhere requirement. According to agency officials, SBA has not performed lender-level analyses of the criteria lenders use for their credit elsewhere justifications. Additionally, SBA has not analyzed 7(a) lenders' use of the "other factors" criterion—that is, factors not specified in the other criteria that, in the lender's opinion, cannot be overcome except for the guarantee—for example, by collecting data on the frequency of its use or examining why lenders rely on it. While some 7(a) lenders told us they avoided relying on the "other factors" criterion because it was vague and open to interpretation, some lenders have used it when a borrower's profile did not meet any of the other criteria. For example, one lender stated that this criterion was used for a borrower who was no longer a start-up but had experienced fluctuations in cash flow due to relocation or change in ownership. Another lender stated that the criterion was used more frequently during the 2007-2009 recession to extend financing to borrowers whose owners had experienced a home foreclosure but were otherwise sound. Federal internal control standards state that management should establish and operate monitoring activities to monitor the internal control system and evaluate the results. Analyzing data on lenders' use of the credit elsewhere criteria as part of its monitoring procedures could help SBA determine whether there are patterns in lender practices related to the criteria that could predict lender noncompliance. For example, SBA could analyze lenders' use of the criteria along with lender review results and other data on loan characteristics and performance to determine whether certain patterns indicate that a lender might be applying the requirement inconsistently.
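As a simplified sketch of the kind of lender-level analysis described above, the following Python code (using the pandas library) aggregates hypothetical loan-level records on which credit elsewhere criterion each lender cited and flags lenders whose reliance on the "other factors" criterion is unusually high. The records, field names, and flagging threshold are all illustrative assumptions, not SBA data or an SBA methodology.

```python
import pandas as pd

# Hypothetical loan-level records: each row is one 7(a) loan, with the
# lender that approved it and the credit elsewhere criterion cited.
# All identifiers and values are invented for illustration.
loans = pd.DataFrame({
    "lender_id": ["A", "A", "A", "B", "B", "C", "C", "C", "C"],
    "criterion": ["longer_maturity", "other_factors", "collateral_shortage",
                  "new_business", "longer_maturity", "other_factors",
                  "other_factors", "other_factors", "collateral_shortage"],
})

# Share of each lender's loans justified by each criterion.
shares = (loans.groupby("lender_id")["criterion"]
               .value_counts(normalize=True)
               .unstack(fill_value=0.0))

# Flag lenders whose reliance on the vague "other factors" criterion is
# well above the portfolio-wide average (one standard deviation here, an
# arbitrary cutoff chosen only for illustration).
other = shares.get("other_factors", pd.Series(0.0, index=shares.index))
threshold = other.mean() + other.std()
flagged = other[other > threshold]

print(shares.round(2))
print("Lenders flagged for further review:", list(flagged.index))
```

In practice, a flag of this kind would only prioritize a lender for further review; it would not by itself establish noncompliance.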
Additionally, such analysis could inform SBA's selection of which lenders to review by improving its ability to identify lenders at risk of noncompliance with the credit elsewhere requirement. Better selection criteria for its lender reviews could, in turn, improve identification and remediation of such noncompliance, helping ensure that the 7(a) program serves its target population. Lenders Generally View Credit Elsewhere Criteria as Adequate, and SBA Has Implemented New Procedures for Reviewing Eligibility Lenders Said Credit Elsewhere Criteria are Generally Adequate for Determining Borrower Eligibility Representatives at 8 of the 11 lenders that we contacted said they believed that SBA's current credit elsewhere criteria are adequate for targeting small business borrowers who cannot obtain credit at reasonable terms. Representatives of these lenders also agreed that the criteria generally serve the types of small businesses that would otherwise have trouble obtaining conventional credit, such as new businesses or those with a shortage of collateral. One lender representative told us its most commonly used criterion was the one related to the overall time in business, because of newer businesses' higher risk of failure. Another lender representative cited the lack of collateral as the most common criterion used. Additionally, representatives at an industry association told us that one of the most commonly used criteria was the one related to loan maturity; many small businesses seek 7(a) loans because they offer repayment terms of up to 10 years, compared with 1 to 3 years for conventional loans. Representatives of two other lenders suggested that the credit elsewhere criteria should not be overly prescriptive, which could limit lenders' ability to make 7(a) loans to some businesses. For example, one representative said the credit elsewhere criteria should remain flexible because banks have different lending policies. In addition, representatives at three lenders indicated that they were hesitant to use the "other factors" criterion. One lender believed the criterion was open to interpretation and could be used inappropriately because lenders determine their own individual conventional loan policies. Another lender commented that the criterion was vague and rarely used by his institution, noting that SBA should provide some additional guidance on its use. Factors Such as Lender Policies and Economic Conditions Also Affect Lenders' Decisions to Offer an SBA 7(a) Loan Lenders consider multiple factors in determining whether to offer small businesses a conventional loan or a 7(a) loan, according to stakeholders with whom we spoke. For example, representatives at an industry association stated that a bank goes through several analyses to determine what loan product to offer the borrower. These representatives stated that the credit elsewhere requirement is embedded in the analysis a bank performs, such as whether the borrower qualifies for a loan and has a financial need for an SBA loan and whether the 7(a) program is right for that borrower. Representatives at two other lenders also stated that many small businesses have already been turned down for conventional loans before they seek a 7(a) loan. One representative noted that the "reasonable rates and terms" component of the 7(a) program was important as it allows lenders to look more broadly at a borrower's needs.
For instance, the representative explained, lenders can assess whether repayment terms are reasonable given a particular borrower's situation and the resources the borrower will have to repay the loan. Economic conditions also affect lending policies, including whether borrowers qualify for a conventional loan, according to representatives at seven lenders with whom we spoke. For example, during the recent economic downturn, banks tightened their underwriting standards for small businesses and were less willing to lend without a government guarantee, according to one lender representative. SBA Has Issued New Procedures for Reviewing Liquidity of Small Business Borrowers, and Additional Lender Training Is Underway SBA has issued revised primary operational guidance for the 7(a) program, effective January 1, 2018. As discussed previously, lenders are required to make a determination that the desired credit is not available to the applicant from nonfederal sources. Under the previous guidance, the lender had to determine that some or all of the loan was not available from nonfederal sources or the resources of the applicant business. However, under the revised guidance, the scope of nonfederal sources a lender must review was further defined to include sources both related and unrelated to the applicant. The updated guidance states that lenders must consider: Nonfederal sources related to the applicant, including the liquidity of owners of 20 percent or more of the equity of the applicant, their spouses and minor children, and the applicant itself; or Nonfederal sources unrelated to the applicant, including conventional lenders or other sources of credit. Representatives of five lenders told us they have been determining how to interpret the new procedures, with a few stating they would like additional guidance, including what information to retain in the file. Representatives of two lenders stated that there is some ambiguity in how to determine nonfederal resources and how to assess whether small business owners have too many available liquid resources to qualify for a 7(a) loan. One representative said that lenders can have different interpretations of what constitutes "available resources," which is not specified in the new standard operating procedure. As a result, he said, there may be some confusion about how to assess family members of the borrower who have high net worth and whether the borrower should decline a family member's contribution to qualify for an SBA loan. A representative of one lender stated that lenders will not know what SBA expects until loans are approved under the new procedures, default, and are then reviewed. Another lender's representative suggested additional guidance on documentation, such as whether the bank must obtain a personal financial statement for each owner of the business. SBA staff told us SBA has provided multiple training presentations to SBA staff, lenders, and trade associations on the statutory changes to the credit elsewhere requirements and standard operating procedure updates. These have included a presentation at a trade association conference, four monthly conference calls for SBA staff, and two conference calls for SBA lenders. SBA staff said SBA also plans to hold monthly training sessions with SBA field offices, quarterly training sessions with the industry, and at least four training sessions in 2018 at lender trade conferences.
Additionally, a representative from an industry association told us it is providing industry training on SBA's revised procedures, including the credit elsewhere liquidity requirement. Conclusions SBA's 7(a) loan program is required to serve creditworthy small business borrowers who cannot obtain credit through a conventional lender at reasonable terms, and SBA largely relies on lenders with delegated authority to make credit elsewhere determinations. Although there is a high rate of lender noncompliance with the credit elsewhere documentation requirement, SBA does not require its reviewers to document their assessment of the policies and procedures lenders use to meet the requirement. Without better information from lender reviews on how lenders are implementing the requirement to document their credit elsewhere decisions, SBA may be limited in its ability to promote compliance with requirements and, in turn, use such information to help ensure that 7(a) loans are reaching their target population. Furthermore, SBA does not routinely collect or analyze information on the criteria used for credit elsewhere justifications to evaluate patterns in lender practices. SBA recently began collecting some information on lenders' use of the criteria, but this information is limited, and SBA does not analyze the information that it does collect to better understand lenders' practices. Without more robust information and analysis, SBA may be limited in its ability to understand how lenders are using the credit elsewhere criteria and whether 7(a) loans are reaching borrowers who cannot obtain credit from other sources at reasonable terms. Recommendations for Executive Action We are making the following three recommendations to SBA. The Administrator of SBA should require reviewers to consistently document their assessments of a lender's policies and practices. (Recommendation 1) The Administrator of SBA should use its on-site and off-site reviews to routinely collect information on lenders' use of credit elsewhere criteria as part of its monitoring of lender practices related to the credit elsewhere requirement. (Recommendation 2) The Administrator of SBA should analyze information on lenders' use of credit elsewhere criteria obtained from its reviews to identify lenders that may be at greater risk of noncompliance and to inform its selection of lenders for further review for credit elsewhere compliance. (Recommendation 3) Agency Comments and Our Evaluation We provided a draft of this report for review and comment. SBA's written comments are reprinted in appendix IV. SBA generally agreed with the recommendations. SBA also provided additional comments on certain statements in the draft report, which are summarized below with our responses. SBA noted that the draft Highlights did not discuss how credit elsewhere is determined for nondelegated loans. We have not revised the Highlights in response to this comment because our review focused on delegated lenders. In the body of the report we note that approximately 70 percent of 7(a) loans are approved under delegated authority. We also refer to SBA's nondelegated loans in the report for additional context. According to SBA, a statement in our draft Highlights did not fully reflect its monitoring of lender compliance. SBA identified a variety of reviews it uses in addition to on-site reviews by third-party contractors, which we discuss in the body of the report. We have modified the Highlights to reflect these other reviews.
Also in reference to the draft Highlights, SBA stated that it provides oversight on every on-site lender review and that an SBA employee is present as a subject-matter expert on every review. We revised the Highlights by adding that SBA provides oversight to the on-site reviews conducted by third-party contractors. In response to a statement in our draft report that SBA guarantees loans to small businesses for working capital and other general business purposes, SBA commented that working capital generally is not the primary purpose for SBA-guaranteed loans. We did not revise the statement because SBA's SOP 50 10 5 (version J) specifies that SBA's 7(a) loan proceeds may be used for permanent working capital and revolving working capital, among other things. In relation to a footnote in our report that mentions two lender reviews for which we did not receive documentation, SBA stated that on February 15, 2018, it provided documentation to us related to the reviews and that we had confirmed its receipt. However, the text in the footnote in question refers to two targeted lender reviews from 2016 that included corrective actions. The information SBA provided to us on February 15, 2018, was related to on-site reviews conducted in 2016. As a result, we did not revise the footnote. SBA's letter also contained technical comments that we incorporated as appropriate. We are sending copies of this report to congressional committees, agencies, and other interested parties. In addition, this report will be available at no charge on our website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-8678 or shearw@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. Appendix I: Objectives, Scope, and Methodology This report discusses (1) 7(a) lending to selected categories of small business borrowers from fiscal years 2007 through 2016; (2) how the Small Business Administration (SBA) monitors lenders' compliance with the credit elsewhere requirement; (3) the extent to which SBA evaluates trends in lender practices related to the credit elsewhere requirement; and (4) lenders' views on the criteria used to determine eligibility for 7(a) loans and other issues related to the 7(a) program. For background on the 7(a) program and the credit elsewhere requirement, we reviewed the legislative history of the 7(a) program and our previous reports. We also interviewed officials from SBA's Office of Credit Risk Management on guidance provided to 7(a) lenders. For background on constraints in the small business credit market, we reviewed recent academic literature on the characteristics of small businesses that historically have had more difficulty accessing credit. In addition, we reviewed recent studies published by the Federal Reserve Banks of Atlanta, Cleveland, Kansas City, and New York. To describe the population of borrowers served by the 7(a) program, we selected characteristics (such as gender, minority status, and percentage of new business) that we used in our 2007 report and that were the subject of the recent studies by Federal Reserve Banks. We obtained and analyzed SBA loan-level data to describe 7(a) loans and borrowers.
Specifically, SBA provided us with 581,393 records from its administrative data systems, which contained information on all loans approved and disbursed in fiscal years 2007 through 2016. The SBA data included various types of information describing each loan, including the total gross approval amount, the amount guaranteed by SBA, the loan term, the interest rate, the delivery method, and the status of the loan. The SBA data also included information on borrower characteristics: Age of business. Firms were classified as new (less than 2 years in operation) or existing. Gender. Firms were classified as 100 percent male-owned, 50 percent or greater women-owned, 50 percent or less women-owned, or "unknown." Information on gender was voluntarily provided by borrowers. Economically distressed area. We identified borrowers in economically distressed areas by matching borrower zip codes provided by SBA to those in the 2011 through 2015 American Community Survey. We defined distressed areas as zip codes where at least 20 percent of households had incomes below the national poverty line. In about 1 percent of the cases, we were unable to classify a borrower because a zip code had changed or had insufficient population to report a poverty rate. We consider 1 percent of unmatched cases to be low by data reliability standards. Race/ethnicity. Borrowers were placed in one of nine categories of race/ethnicity, including an "unknown" category. We aggregated these to create minority, nonminority, and undetermined categories. The minority category included all borrowers who reported being a race/ethnicity other than white. The nonminority category included borrowers who reported being white. Information on race was voluntarily provided by borrowers. Industry. Firms were assigned a North American Industrial Classification code. These six-digit codes begin with a two-digit sector code that we used to draw more general conclusions about industries. Geographic information. The data provided the state where the borrower is located. In addition, we obtained information from SBA on loan- and lender-level Small Business Risk Portfolio Solution scores (predictive scores) provided by Dun & Bradstreet and Fair Isaac Corporation, for loans approved in fiscal year 2016, the latest available. We were able to obtain predictive scores for approximately 81 percent of the loans for which SBA had provided other information. According to SBA, some loans may not have been disbursed at the time we obtained the predictive scores and, as a result, we do not have scores associated with these loans. We analyzed the information to determine the range of predictive scores and the range of average predictive scores by lender. To assess the reliability of loan-level data on borrower and loan characteristics and predictive scores we received from SBA, we interviewed agency officials knowledgeable about the data and reviewed related documentation. We also conducted electronic testing, including checks for outliers, missing data, and erroneous values. We determined that the data were sufficiently reliable for the purposes of describing the characteristics of borrowers who received 7(a) loans and the distribution of predictive scores. To assess how SBA monitors lenders' compliance with the credit elsewhere requirement and criteria, we reviewed SBA's standard operating procedures and other guidance on 7(a) program regulations and lender oversight.
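To illustrate the distressed-area classification described earlier in this appendix, the following sketch (Python with pandas) matches hypothetical loan records to American Community Survey poverty rates by zip code and computes the share of approved dollars going to distressed areas. All data values and field names are invented; only the 20 percent poverty threshold and the handling of unmatched zip codes mirror the approach described above.

```python
import pandas as pd

# Hypothetical extracts: SBA loan records with borrower zip codes, and
# American Community Survey (ACS) poverty rates by zip code. All values
# and field names are invented for illustration.
loans = pd.DataFrame({
    "loan_id": [1, 2, 3],
    "borrower_zip": ["30301", "10001", "99999"],  # "99999" has no ACS match
    "gross_approval": [350_000, 1_200_000, 80_000],
})
acs = pd.DataFrame({
    "zip": ["30301", "10001"],
    "poverty_rate": [0.24, 0.12],  # share of households below the poverty line
})

# Left-join so loans with unmatched zip codes are kept but left
# unclassified, mirroring the roughly 1 percent of unmatched cases.
merged = loans.merge(acs, left_on="borrower_zip", right_on="zip", how="left")

# A zip code is "distressed" if at least 20 percent of households are
# below the poverty line; unmatched zips (NaN rates) evaluate to False.
merged["distressed"] = merged["poverty_rate"] >= 0.20

share = (merged.loc[merged["distressed"], "gross_approval"].sum()
         / merged["gross_approval"].sum())
print(f"Share of approved dollars in distressed areas: {share:.0%}")
```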
Specifically, we reviewed SOP 50 10 5 (versions I and J) on Lender and Development Company Loan Programs, SOP 50 53(A) on Lender Supervision and Enforcement, and SOP 51 00, On-Site Lender Reviews/Examinations, as well as information and policy notices related to the credit elsewhere requirement. Additionally, we interviewed officials, including those in SBA's Office of Capital Access and Office of Credit Risk Management, on lender oversight and lender review processes. We reviewed all the on-site lender review reports (40 reviews), including corrective actions or requirements related to the credit elsewhere requirement (documentation for 11 lenders), and targeted review reports that had credit elsewhere findings (7 reviews) that SBA conducted in fiscal year 2016. We also interviewed officials and reviewed recent reports from SBA's Office of Inspector General. To assess the extent to which SBA evaluates trends in lender practices related to the credit elsewhere requirement, we interviewed SBA officials and reviewed documentation for SBA's online portal for loan origination. We also incorporated information from interviews with a nongeneralizable, nonrepresentative sample of 7(a) lenders, which we discuss below. To obtain lenders' views on the criteria used to determine eligibility for 7(a) loans and other program-related issues, we interviewed SBA staff, including from the Office of Capital Access, and representatives of the National Association of Government Guaranteed Lenders, American Bankers Association, Independent Community Bankers Association, and National Federation of Independent Businesses. We also interviewed 11 banks (one bank provided written responses) in order to obtain the lender perspective on credit elsewhere. Nine of the banks were selected by us using a random process that concentrated on larger lenders. These nine lenders represent about 13 percent of the loans approved and 16 percent of the dollars approved in 2016. In addition, we interviewed two banks that represented an industry group: one larger bank and one small bank. Although we partially selected lenders at random, the views of the lenders we interviewed should not be considered generalizable because of the small number. We conducted this performance audit from August 2017 to June 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Selected Characteristics of 7(a) Lending, Fiscal Years 2007–2016 In this appendix, we provide information on the total amount and number of approved 7(a) loans and the top eight industry sectors receiving 7(a) loans. Data are also presented on fiscal year 2016 loan volume by state and per capita. As shown in figure 6 below, the total amount of approved 7(a) loans decreased during the period associated with the Great Recession (2007 through 2009). From fiscal year 2009 on, the total amount of approved 7(a) loans increased until a decline in fiscal year 2012. During this timeframe, the American Recovery and Reinvestment Act of 2009 and the Small Business Jobs Act of 2010 provided fee relief and higher guaranties.
The Small Business Jobs Act of 2010 also provided a temporary increase in Small Business Administration (SBA) Express loan limits to $1 million (instead of $350,000). These programs have since expired. 7(a) Loans by North American Industry Classification System (NAICS) code. Table 1 shows the largest eight industrial sectors by proportion of the total amount of 7(a) loans approved, using the NAICS code. The combined share of the top eight sectors declined slightly from 85 percent to 80 percent of the total lending from fiscal years 2007 through 2016, with an average of 82 percent. During this period, the Accommodation and Food Services sector had the largest average share of total loan amount at 17 percent, followed by the Retail Trade sector at 15 percent. Approved loan amount and per capita dollars by state. As shown in figure 7, California, Texas, Florida, Georgia, and New York received the highest total of approved loan dollars in fiscal year 2016. The average approval amount across all loans was $380,619. Georgia and Arkansas had the largest average approval amounts in 2016. Also, during this period, Utah, Colorado, Georgia, California, and Washington received the highest per capita approved loan dollars. Appendix III: Information on Borrower Characteristics Based on SBA's Predictive Scores In fiscal year 2016, creditworthiness varied widely among 7(a) program borrowers. We analyzed creditworthiness using the Small Business Administration's (SBA) Small Business Risk Portfolio Solution score (predictive score), which ranges from 70 to 300, with 300 indicating the least risky loan. According to SBA, loans with scores above 180 are considered "lower risk," scores between 140 and 179 are considered "moderate risk," and scores 139 and lower are considered "higher risk." There did not appear to be differences in score based on the gender of the borrower or the age of the business. While SBA relies on the predictive score data to identify lenders that may pose excessive risk to the SBA 7(a) portfolio, the data also provide potential insights related to lender implementation of the credit elsewhere requirement. Variation. We found that some 7(a) borrowers were much more creditworthy than others. In 2016, the only year for which we obtained data, the predictive score at origination varied widely among borrowers, ranging from a low of 91 to a high of 246. However, most scores were between 171 and 203, and the median score was 188. Race/ethnicity. We found that there were slight differences in creditworthiness by race/ethnicity, with median scores ranging from 180 to 189 depending on the category. Specifically, loans to African Americans in 2016 had a median score of 180, and loans to Hispanics had a median score of 183. In contrast, loans to whites had a median score of 188, and loans to Asian and Pacific Islanders had a median score of 189. Lender size. We found that lenders with larger numbers of SBA loans tended to have slightly more creditworthy borrowers. The top 5 percent of lenders had a median average score of 187, whereas the bottom 75 percent of lenders had a median average score of 182.5. Among the top 5 percent of lenders (with 374 loans per lender on average, collectively representing about 70 percent of the loans approved), the average score ranged from 171 to 195. Among all lenders, the average score ranged from 116 to 233.
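A minimal sketch of how these risk categories and lender-level averages could be computed from loan-level scores follows (Python with pandas). The records are invented, and the treatment of a score of exactly 180, which the published ranges leave unassigned, is an assumption made only for this sketch.

```python
import pandas as pd

# Hypothetical loan-level predictive scores (the actual scale runs from
# 70 to 300, with 300 the least risky); all records here are invented.
scores = pd.DataFrame({
    "lender_id": ["A", "A", "B", "B", "B", "C"],
    "score": [188, 171, 203, 139, 180, 246],
})

def risk_bucket(score: int) -> str:
    # Categories as described by SBA: above 180 "lower risk," 140-179
    # "moderate risk," 139 and lower "higher risk." A score of exactly
    # 180 is unassigned by those ranges; this sketch assumes it is
    # moderate risk.
    if score > 180:
        return "lower risk"
    if score >= 140:
        return "moderate risk"
    return "higher risk"

scores["risk"] = scores["score"].map(risk_bucket)

# Lender-level average scores, analogous to the lender averages above.
lender_avg = scores.groupby("lender_id")["score"].agg(["mean", "count"])
print(scores)
print(lender_avg)
print("Median of lender-average scores:", lender_avg["mean"].median())
```

The count column illustrates why averages based on only one or two loans warrant caution, a point addressed next.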
However, because many lenders only approved one or two loans in 2016, the average may reflect very few borrowers for that lender, making it difficult to tell whether the scores reflect a real difference between lenders. Appendix IV: Comments from the Small Business Administration Appendix V: GAO Contact and Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Harry Medina (Assistant Director), Janet Fong (Analyst in Charge), Benjamin A. Bolitzer, Gita DeVaney, David S. Dornisch, Amanda D. Gallear (intern), Marc W. Molino, Jennifer W. Schwartz, and Tyler L. Spunaugle made key contributions to this report.
Why GAO Did This Study SBA's 7(a) program is required to serve creditworthy small business borrowers who cannot obtain credit through a conventional lender at reasonable terms. The Joint Explanatory Statement of the Consolidated Appropriations Act, 2017 includes a provision for GAO to review the 7(a) program. This report discusses, among other things, (1) how SBA monitors lenders' compliance with the credit elsewhere requirement, (2) the extent to which SBA evaluates trends in lender credit elsewhere practices, and (3) lenders' views on the credit elsewhere criteria for 7(a) loans. GAO analyzed SBA data on 7(a) loans approved for fiscal years 2007–2016, the latest available, and reviewed literature on small business lending; reviewed standard operating procedures, other guidance, and findings from SBA reviews performed in fiscal year 2016; and interviewed lender associations and a nonrepresentative sample of 7(a) lenders, selected with a concentration on larger lenders. What GAO Found For its 7(a) loan program, the Small Business Administration (SBA) has largely delegated authority to lenders to make 7(a) loan determinations for those borrowers who cannot obtain conventional credit at reasonable terms elsewhere. To monitor lender compliance with the "credit elsewhere" requirement, SBA primarily uses on-site reviews conducted by third-party contractors with SBA participation and oversight, and other reviews. According to SBA guidance, lenders making 7(a) loans must take steps to ensure and document that borrowers meet the program's credit elsewhere requirement. However, GAO noted a number of concerns with SBA's monitoring efforts. Specifically, GAO found the following: Over 40 percent (17 of 40) of the on-site lender reviews performed in fiscal year 2016 identified lender noncompliance with the requirement. On-site reviewers identified several factors, such as weaknesses in lenders' internal control processes, that caused lender noncompliance. Most on-site reviewers did not document their assessment of lenders' policies or procedures, because SBA does not require them to do so. As a result, SBA does not have information that could help explain the high noncompliance rate. Federal internal control standards state that management should design control activities, including appropriate documentation, and use quality information to achieve the entity's objectives. Without better information on lenders' procedures for complying with the documentation requirement, SBA may be limited in its ability to promote compliance with requirements designed to help ensure that the 7(a) program reaches its target population. SBA does not routinely collect or analyze information on the criteria used by lenders for credit elsewhere justifications. SBA recently began collecting some information on lenders' use of the criteria, but this information is limited, and SBA does not analyze the information that it does collect to better understand lenders' practices. Federal internal control standards state that management should use quality information to achieve the entity's objectives. Without more robust information and analysis, SBA may be limited in its ability to understand how lenders are using the credit elsewhere criteria and identify patterns of use by certain lenders that place them at a higher risk of not reaching borrowers who cannot obtain credit from other sources at reasonable terms.
In general, representatives from 8 of 11 lenders that GAO interviewed stated that SBA's credit elsewhere criteria are adequate for determining small business eligibility for the 7(a) program. These criteria help them target their lending to small businesses that would otherwise have difficulty obtaining conventional credit because they are often new businesses or have a shortage of collateral. However, they also said that other factors—such as lender policies and economic conditions—can affect their decisions to offer 7(a) loans. In January 2018, SBA issued revised guidance for the 7(a) program and has provided training on this new guidance to lenders and trade associations. Lenders told GAO they are still in the process of understanding the new requirements. What GAO Recommends GAO recommends that SBA (1) require its on-site reviewers to document their assessment of lenders' policies and procedures related to the credit elsewhere documentation requirement, (2) collect information on lenders' use of credit elsewhere criteria, and (3) analyze that information to identify trends. SBA generally agreed with the recommendations.
Background U.S. taxpayers who earn income abroad may be subject to U.S. taxes on that income. Firms incorporated in the United States can earn income from their own foreign activities or through their ownership of foreign subsidiaries. In such cases, income is subject to tax both in the country where it was earned and in the United States. In this report, we focus on U.S. corporations with operations in foreign countries. Countries have generally adopted one of two alternative approaches to taxing corporations' foreign income. Prior to the enactment of Public Law 115-97—commonly referred to by the President and many administrative documents as the Tax Cuts and Jobs Act of 2017 (TCJA)—the U.S. government taxed U.S. corporations largely on a worldwide basis, meaning that the United States taxed both the domestic and foreign earned income of corporations. Most other countries, including most Organisation for Economic Co-operation and Development (OECD) member countries, use a largely territorial approach that taxes income earned within their borders, and exempts certain foreign-earned income of their resident corporations from taxation. However, under both a worldwide and a territorial system, income earned by foreign entities from operations within a country is taxed by that country. As such, the corporation or its subsidiary must file a tax return in that country, and the country's tax authority can audit the tax return and adjust taxable income and taxes due. Countries have adopted measures to limit the potential for double taxation, which occurs when two or more countries levy taxes on the same income due to differences in the tax jurisdictions and tax systems. To avoid double taxation, countries—including the United States—that tax on a worldwide basis provide a credit for foreign taxes paid that reduces the multinational corporation's (MNC) domestic tax liability. In addition, countries maintain tax treaties with each other that cover a wide range of tax issues but have two primary purposes: (1) avoiding double taxation, and (2) preventing tax evasion. Despite these efforts to limit disputes, a U.S. MNC may disagree with an adjustment made to its taxable income. In such cases, an MNC can go directly to the country's tax authority to try to resolve the dispute. However, according to tax experts we spoke with, if a U.S. MNC views this process as unlikely to be successful, or if it was unsuccessful and the MNC believes the adjustment would result in double taxation, the corporation can ask the U.S. competent authority (USCA) for assistance in resolving the dispute. In the United States, the designated USCA is the commissioner of the Large Business and International Division of the IRS. The USCA office is made up of two groups: the Advance Pricing and Mutual Agreement Program (APMA) and the Treaty Assistance and Interpretation Team. According to USCA officials, most disputes involving U.S. MNCs—the focus of this report—are resolved through APMA. TCJA significantly changed the way in which the United States taxes MNCs' income, but some experts have pointed out that the law is unlikely to end profit shifting. The Congressional Budget Office estimated in April 2018 that TCJA would reduce profit shifting by about $65 billion per year out of an estimated $300 billion of profit shifting per year prior to the act. For U.S. corporations earning income directly through foreign subsidiaries, the act moved the United States from a system that generally taxed worldwide income and provided a credit for taxes paid abroad to a system that generally does not tax foreign-sourced income.
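As a simplified numeric illustration of the worldwide-with-credit approach described above, the following sketch shows how a foreign tax credit prevents the same income from being taxed twice in full. The amounts and the foreign rate are invented, and the example abstracts from the many limitations that applied to the actual credit; only the 35 percent figure, the pre-TCJA U.S. statutory corporate rate, is drawn from fact.

```python
# Simplified illustration of the pre-TCJA worldwide system with a foreign
# tax credit. Amounts and the foreign rate are invented for clarity.
foreign_income = 100.0   # income earned through a foreign subsidiary
foreign_tax_rate = 0.20  # source country's tax rate (assumed)
us_tax_rate = 0.35       # pre-TCJA U.S. statutory corporate rate

foreign_tax = foreign_income * foreign_tax_rate       # 20.0 paid abroad
us_tax_pre_credit = foreign_income * us_tax_rate      # 35.0 owed in the U.S.

# The credit offsets U.S. tax dollar for dollar, capped at the U.S. tax
# on that income, so the combined burden equals the higher of the two
# rates rather than their sum.
credit = min(foreign_tax, us_tax_pre_credit)
us_tax_after_credit = us_tax_pre_credit - credit      # 15.0

print("Total tax without a credit:", foreign_tax + us_tax_pre_credit)   # 55.0
print("Total tax with the credit:", foreign_tax + us_tax_after_credit)  # 35.0
```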
However, the new ‘territorial’ system created by the act included a number of provisions designed to protect the United States’ corporate tax base by taxing some foreign income. It included (1) a lower worldwide tax on global intangible low-taxed income, and (2) a corresponding tax on intangible income earned abroad based on assets in the United States (foreign-derived intangible income). The act also added a corporate tax base erosion and antiabuse tax. It is not clear how these provisions will affect corporations’ allocation of profits and business activity. MAP Has Multiple Stages and Potential Resolution Paths The process of resolving a dispute through the mutual agreement procedure (MAP) usually begins when a U.S. MNC requests assistance from USCA to resolve disputes over an adjustment in either its foreign-filed or its U.S. tax return. According to IRS, the number of active MAP cases, as of October 2017, was 686, covering $26 billion of income subject to potential double taxation. It should be noted that a single U.S. taxpayer can be involved in multiple MAP cases because disputes are resolved bilaterally. For example, if a U.S. MNC had a dispute involving the allocation of overhead costs across multiple subsidiaries in different countries, there would be a separate dispute case for each country involved. According to IRS data, the number of MAP cases filed each year has been growing, more than doubling in 5 years from 100 in 2010 to 286 in 2014. As noted earlier, when a U.S. MNC disputes a foreign tax authority’s adjustment to a tax return, the U.S. MNC can try to resolve the issue through the appeals process within the taxing jurisdiction. However, according to tax experts we spoke with, if the U.S. MNC is unsuccessful, or if the U.S. MNC believes the local appeal will be less successful than the MAP process, it can request assistance from USCA. Once a taxpayer has requested assistance through MAP, USCA conducts an initial review to determine if it will accept the request. For example, USCA analysts would ensure that the request involves potential double taxation and that the foreign country is a treaty partner. If USCA accepts the MAP request for assistance, it reviews the technical facts of the dispute and prepares its position prior to negotiating on a resolution with the foreign competent authority. When IRS, rather than the foreign tax authority, initiates the adjustment, USCA will discuss the facts of the case with the IRS examiner who proposed the adjustment, but determines on its own how much of the adjustment is justified. In the case of foreign-initiated adjustments, USCA will contact the foreign competent authority while developing its position to provide updates and obtain any needed information. According to USCA officials, based on its review, USCA determines whether it considers the adjustment valid, how much of the adjustment should be withdrawn by the initiating tax authority, and what amount of relief USCA may provide. USCA can also unilaterally decide to fully withdraw the IRS adjustment or provide full correlative relief for a foreign-initiated adjustment that USCA considers valid. USCA resolves disputes brought to it by MNCs according to MAP specified in the tax treaties. Under the treaties, international tax disputes that may result in double taxation can be resolved in the following five ways: The country that initiated the adjustment to taxable income can fully withdraw the adjustment, leaving the taxpayer’s reportable taxable income unchanged.
USCA can provide correlative relief to the MNC. This relief usually takes the form of a corresponding adjustment, which relieves double taxation caused by the other country's adjustment. USCA and the foreign country can agree to a combination of withdrawing some of the adjustment to taxable income and providing relief for the remaining adjustment to provide full relief of double taxation to the taxpayer. USCA and the foreign country can agree on some combination of withdrawal and relief that results in partial relief to the taxpayer. No relief from adjustment. Figure 1 provides an overview of the basic process of a MAP request for assistance. Appendix III provides illustrative examples of dispute resolution cases and resolutions. Once USCA has determined its position, it begins negotiating with the foreign competent authority to resolve the dispute. These cases can take several years to resolve, with some taking much longer than the average, particularly if there is a fundamental disagreement. For example, USCA's APMA inventory data from 2013 to 2017 indicate the average processing time was around 2 years, but cases ranged from as little as a few months to 5 years to resolve, with a few cases taking even longer. In addition, the inventory data show that disputes are generally over taxable income from prior years. For example, a MAP case resolved in 2017 could have been filed in 2008 for a dispute over 2005 taxable income. However, cases may be shorter when the tax treaties include provisions for binding arbitration. The United States has treaties with four countries that include provisions for binding arbitration. If the two countries are unable to resolve the dispute within 2 years, the taxpayer can request that the case go to arbitration for a decision. Throughout the entire process, the taxpayer has a right to withdraw the request and accept the tax authority's adjustment, which may entail double taxation. According to tax experts that we interviewed, if the adjustment is small, a taxpayer may prefer to accept the double taxation rather than incur the cost of going through the MAP process. These costs can include the direct costs of retaining tax advisors as well as the indirect costs of listing the amount of funds that are in dispute on its financial statements as an unresolved tax issue. The taxpayer can also refuse the negotiated or arbitrated resolution and appeal the case to the IRS Office of Appeals or the foreign tax authority. Available Information about MAP is Limited and Highly Technical USCA Provides Information Needed for Requesting MAP Assistance, but the Information has Limited Accessibility USCA provides information about the MAP process through an IRS web page on competent authority assistance. The webpage includes contact information for USCA offices and a link to a document that describes the process for requesting assistance. The document is in the form of a Revenue Procedure—an official statement of a procedure based on the Internal Revenue Code, related statutes, tax treaties, and regulations. Our analysis of the information on the website found a number of issues that limit its accessibility: The website does not include an overview or high-level description of the MAP process. The website lacks elements, such as frequently asked questions or fact sheets, that IRS has developed for similar processes and that help promote understanding of complex tax issues. The website does not explain in clear language what constitutes a tax dispute eligible for the MAP resolution process.
Other IRS websites provide more detailed information for other issues relevant to U.S. MNCs. For example, the IRS website for country-by-country reporting provides a detailed page explaining the new reporting guidance with multiple links for additional guidance. In addition, USCA's guidance for requesting MAP assistance is an 87-page revenue procedure. While this document is complete, it is highly technical and may not be easily understood by taxpayers seeking relief from double taxation. IRS requires information for taxpayers to be clear and accessible. IRS's Taxpayer Bill of Rights states that taxpayers have the right to clear explanations of tax laws and IRS procedures. In addition, the federal internal control standards, the Plain Writing Act of 2010, and Office of Management and Budget plain writing guidance state that agencies should, for example, communicate the necessary quality information externally. Moreover, accessibility is consistent with the criteria we have previously identified for a good tax system. IRS's Strategic Plan for Fiscal Years 2018-2022 notes that the agency faces a business environment that is becoming more global, dynamic, and digital, further underscoring the importance of taxpayers having accessible, plain language guidance on MAP. The OECD also assessed the accessibility of USCA's guidance and found that it met OECD's minimum standards. As part of its base erosion and profit-shifting project, the OECD has been reviewing countries' administration of the mutual agreement process. In its review of the United States' process, the OECD concluded that while U.S. MAP guidance is comprehensive, available, and fully meets the OECD's minimum standards, some further clarity could be provided. The OECD review offered examples of how other countries provide taxpayers with overview information they can use before accessing more detailed technical guidance. For example, Canada publishes an annual MAP Program Report on its website that includes background information on its process, as well as general information on the steps in the process and high-level information on timeframes. Singapore's MAP web page includes basic information on the MAP process, an example of a case that would be suitable for MAP, and a link for users to provide feedback on the usefulness of the information. USCA officials said that they have not improved the information provided on their website because they believe the current guidance to be sufficient. However, USCA officials told us that they are engaged in some efforts that may improve the information they provide to taxpayers. USCA officials stated that USCA is close to finalizing a "practice unit" explaining the competent authority process. According to USCA officials, this unit uses plain language to walk taxpayers step by step through MAP and the competent authority process. The unit also highlights the roles and responsibilities of all the stakeholders in the process, including the taxpayers. USCA officials said they intend to make the practice unit available on USCA's public website and the United States' OECD MAP Profile. APMA officials also said they expect that the additional information on the requirements of MAP and Revenue Procedure 2015-40 will be useful to those unfamiliar with the processes. USCA officials did not provide a date for when this practice unit would be completed.
Providing taxpayers with a clear overview and accessible guidance on the MAP process would help ensure that taxpayers who might benefit from entering the MAP process are aware of the process, know how to navigate it, and understand the general time frames for relief. Providing information that helps facilitate this process could help reduce taxpayer burden. USCA Does Not Document Contacts with Taxpayers USCA may contact taxpayers about their cases for various reasons. Officials in the APMA office stated that they send acknowledgement letters when the MAP request is accepted and routinely gather additional information from taxpayers to fully develop a MAP case. They said that an analyst generally will communicate with a taxpayer before and after APMA has substantive discussions with its foreign counterparts regarding the taxpayer’s case. While officials stated they provide regular contact, they do not have a process to systematically record or track these contacts, other than in the case file. Regular contact with taxpayers may help make the process more transparent and help ensure that they are informed about their cases. One of the criteria we have previously identified for a good tax system is transparency. A transparent tax system reduces uncertainty for taxpayers, allowing them to better plan their decisions about employment and investment. According to IRS officials, APMA provides general guidance on when a taxpayer should be notified of developments in the case or its status. APMA officials stated that contact will vary depending on the facts and circumstances of the case, such as its complexity and the frequency of communications with the foreign competent authority. However, the guidance is focused on taxpayer expectations and does not address any requirements for officials to track or record contacts. Contacts with taxpayers could affect perceptions of the transparency and fairness of the MAP process. Tracking and recording contact with taxpayers would help provide APMA with assurance that taxpayers are being kept aware of the status of their MAP case in a timely manner. Monitoring such information would help APMA to evaluate the transparency and fairness of its MAP administration. It would also help assure APMA that taxpayers are contacted consistently. USCA Does Not Track Key Data nor Use Existing Data to Assess Management of MAP Cases USCA Does Not Track Hours Worked or Key Milestones for MAP Cases APMA maintains an inventory database that tracks some information on MAP cases. These data include how many months it took to resolve the case, the analyst assigned to the case, and whether an economist was assigned. According to APMA officials, each MAP case is assigned an analyst and, for complex cases, an economist. APMA groups analysts into teams that work on MAP cases from different geographic regions. Three teams consist of economists who are assigned to cases managed by other teams. APMA data on how staff are deployed are shown in table 1. While these data provide some information on workload, they do not provide information on how many hours or staff days are associated with a particular case. This information would be useful because it could provide insight about the resources needed for different cases based on differences in complexity and other factors. Standards for internal control state that management should establish and operate monitoring activities that can be used to evaluate results and ensure that objectives are met with minimum wasted resources.
However, according to APMA officials, their tracking system is not set up to track hours or staff days spent on each case. Instead, according to APMA officials, their staffing process accounts for differences in complexity in other ways. Officials explained that when APMA receives a MAP request, it ranks the request according to complexity using a scale that runs from 1 to 5. The more complex cases, those ranked 3 or higher, are assigned an economist, which can increase the cost of working those cases. APMA Does Not Have Controls to Ensure the Quality of its Case Data In our review of a generalizable sample of MAP case files, we found a number of inconsistencies between the amount of adjustment recorded in APMA’s inventory database, the amount recorded in the original MAP request, and the amount recorded in the resolution letter provided to taxpayers and the foreign competent authority. We also found inconsistencies between the request letter and the resolution letter amounts. On the basis of our sample, we estimate that about 30 percent of the entries in the inventory database had these types of discrepancies. The cause of some of these discrepancies was relatively easy to identify and correct, such as transcription errors, which could have been detected if APMA had a more robust inventory management system in place. Other inconsistencies in the data were more difficult to resolve. According to IRS officials, some discrepancies could be explained by changes in exchange rates over time. However, other inconsistencies could not be as easily explained. These inconsistencies exist because APMA does not have controls in place to systematically and routinely evaluate the quality of the data in its inventory of cases. As a result, the accuracy of program measures that USCA might develop based on these data may be uncertain. Having controls in place to ensure the accuracy of data in the inventory database would also help APMA meet OECD’s minimum standards. The OECD has called for countries to provide MAP case statistics by country and published these statistics for the first time in 2018. According to APMA officials, APMA is currently working on implementing an upgraded inventory management system that should help APMA meet this goal. Development and full implementation of this project have been underway for 4 years. APMA Does Not Analyze Currently Available Data to Inform Its Operations and Management Decisions APMA’s inventory database includes data on both pending and resolved MAP cases that can help management monitor program operations and potentially identify areas to improve the management of MAP cases. However, APMA does not systematically analyze data to identify areas for improvement. For example, analysis of trends and comparisons of certain case characteristics—such as the country initiating the adjustment, the elapsed time on the case, whether an economist was assigned to the case, and the negotiated outcome—can help to identify how these characteristics may be related. According to APMA officials, they do not undertake this kind of data analysis because they use the data as needed to manage current resources and to achieve their primary goal of satisfying the OECD’s minimum standards. These minimum standards include such goals as countries ensuring that adequate resources are provided to the MAP function and ensuring that both competent authorities are made aware of MAP requests and given an opportunity to share their views on whether the request should be accepted.
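As noted above, we estimate that about 30 percent of inventory entries had discrepancies across the database, the request letters, and the resolution letters. A routine, automated cross-check of recorded dollar amounts is one control that could catch many such discrepancies, particularly transcription errors. The following sketch illustrates the idea; the file layout and field names are hypothetical and do not reflect APMA’s actual systems.

```python
import csv

def find_discrepancies(path, tolerance=0.01):
    """Flag cases whose recorded adjustment amounts disagree across sources.

    Hypothetical layout: one row per MAP case, with the adjustment amount as
    recorded in (1) the inventory database, (2) the taxpayer's request letter,
    and (3) the resolution letter.
    """
    flagged = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            amounts = {
                "database": float(row["db_amount"]),
                "request": float(row["request_amount"]),
                "resolution": float(row["resolution_amount"]),
            }
            # Treat the database entry as the baseline; flag the case if any
            # source deviates by more than the tolerance (which absorbs
            # rounding but not transcription errors). Legitimate differences,
            # such as exchange-rate changes, would still need manual review.
            baseline = amounts["database"]
            if any(abs(v - baseline) > tolerance * max(abs(baseline), 1)
                   for v in amounts.values()):
                flagged.append((row["case_id"], amounts))
    return flagged

# Example use: list flagged cases for manual review.
for case_id, amounts in find_discrepancies("map_inventory.csv"):
    print(case_id, amounts)
```

A check along these lines could run each time the inventory is updated, so that discrepancies are identified while the case file is still at hand.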
According to federal internal control standards, management should design information systems to provide information to meet the entity’s objectives and respond to risks. Information and analysis that helps APMA understand changes in the international environment and the complexity of U.S. MNCs would better enable it to identify future resource needs by evaluating trends in case characteristics. In the absence of quantifiable analysis conducted by APMA, we used information from its existing inventory data to illustrate the types of analysis that may be possible. For example, figure 2 shows that the volume of cases can vary greatly by country over time. The figure shows that the number of cases resulting from an adjustment by IRS ranged from a low of 22 in 2015 to a high of 85 in 2017. Conducting similar analysis of trends in volume may help APMA better plan for allocating its limited resources to different teams in anticipation of increased case volume. In addition, because APMA allocates staff across teams that focus on particular countries, tracking trends in case load by country could help USCA anticipate spikes in cases and allocate resources more effectively across country teams. By conducting regular trend analyses, APMA could also identify areas for further analysis to determine what may be driving variations in case load by country. Similarly, figure 3 shows our analysis of the average time to resolve a case. Average case time ranged between 15 and 40 months, with the average case time exceeding the OECD-recommended 24-month period for a number of countries and years. By conducting similar analysis of the trends and differences in processing time across MAP cases, APMA would be better able to identify areas meriting additional review for ways to improve timeliness. We also used inventory data to analyze outcomes in terms of the determinations reached through MAP negotiations. One analysis included an examination of the share of cases in which the United States provided some relief to the taxpayer. As can be seen in figure 4, most foreign cases in most years resulted in relief being shared between the two countries involved in a dispute. As shown in figure 4, in 2017, approximately two-thirds of all foreign cases were resolved with both countries providing some relief compared to less than 10 percent of U.S. cases. However, as shown in figure 5, USCA in most years fully withdrew a large percentage of adjustments made by IRS. In 2017, 74 percent of IRS adjustments were withdrawn. The data show that U.S.-initiated cases were more often resolved entirely by the United States than with the foreign country providing some of the relief. However, these data on case resolutions need to be interpreted with caution. For example, as pointed out by IRS officials, a measure like the percent withdrawn may be misinterpreted if, for example, it reflects a small number of large MNCs with operations in many countries or small adjustment amounts, unless this information is provided as context. Nonetheless, the case resolution data can be useful for identifying areas that would merit further analysis, such as the reasons for withdrawing cases or the reasons IRS examiners are making adjustments that are not upheld by USCA. Analyzing trends in outcomes would help to ensure that APMA is not missing opportunities to protect the U.S. corporate tax base and that IRS examiners are cognizant of tax treaty treatment of foreign source income of U.S. MNCs.
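To make the illustration concrete, the sketch below shows how the trend analyses described above—annual case volume and average processing time by initiating country—could be computed from an inventory extract. The column names are assumptions for illustration, not APMA’s actual database schema.

```python
import pandas as pd

# Assumed columns: case_id, initiating_country, year_resolved, months_to_resolve
cases = pd.read_csv("map_inventory.csv")

# Annual case volume by initiating country -- useful for anticipating spikes
# in workload for particular country teams.
volume = (cases.groupby(["initiating_country", "year_resolved"])
               .size()
               .rename("cases_resolved"))

# Average months to resolve, by country and year.
avg_time = (cases.groupby(["initiating_country", "year_resolved"])["months_to_resolve"]
                 .mean()
                 .rename("avg_months"))

trends = pd.concat([volume, avg_time], axis=1).reset_index()

# Flag country-years whose average exceeds the OECD-recommended 24 months.
print(trends[trends["avg_months"] > 24])
```

Routine output of this kind could feed directly into staffing decisions across APMA’s country-focused teams.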
Additional examples of MAP case data analysis are provided in appendix IV. While APMA must work all MAP cases, developing quality data on MAP cases would help to ensure effective management of the program. Analyzing trends in case data could help identify and manage evolving demands and priorities—such as the challenges present in a changing global tax environment. According to federal internal control standards, as a part of management controls, management should design information systems to obtain and process information to meet operational needs. Because APMA cannot alter its workload, it is all the more important to effectively manage staff and time. Reliable information systems are essential for effective management. Without assessing APMA’s current and past performance, APMA may be less able to identify areas for improvement. Conducting analysis and improving the quality of data could help inform APMA’s allocation of resources and inform other parts of the agency concerning international tax issues. For example, IRS’s examination function may be better able to judge the appropriateness of its tax adjustments when it is informed about how USCA has viewed similar adjustments governed by tax treaties. APMA Does Not Record the Disputed Tax Issues in Its Inventory Database, Which Limits the Usefulness of Data The APMA inventory database contains select characteristics of resolved cases, such as the time it took to resolve the case and the country that initiated the adjustment in dispute. However, it does not contain information on the tax issue that was in dispute. Without tracking the tax issue in dispute, APMA is unable to analyze trends in tax issues, which could be used to determine whether there are systemic issues that could be addressed through changes in IRS regulations, treaties, or statutes. USCA officials told us that there are additional costs to tracking tax issues and that defining the type of tax issue involved in complex international tax cases could be difficult. However, IRS tracks issues in other similar areas. For example, IRS’s Office of Appeals, which handles a wide range of tax controversies covering both international and domestic issues, tracks the tax issue in dispute. Furthermore, APMA includes categories of tax transactions in its annual statutory reports. The categories are used in Advanced Pricing Agreements (APA) to distinguish between a U.S. entity and a non-U.S. entity and to determine whether a transaction covered by an agreement involved the sale of tangible property, the use of intangible property, or the provision of services. APAs are agreements between IRS and MNCs on how transactions among related entities of the MNC should be priced. APAs can prevent potential disputes by establishing agreement on the transaction prior to filing a tax return with IRS. These categories, or alternative categories that APMA has already developed, could be added to the inventory database to provide additional information on the tax issue in dispute. To illustrate how the additional information on tax issues can help inform management decisions, we categorized the tax issues in our sample of MAP cases using APA categories. As shown in figures 6 and 7, we compared the estimated percentage of certain tax issues in all MAP cases between 2015 and 2017 with those in APA cases in 2014. We also compared tax issues with other characteristics of the MAP cases.
As figure 6 shows, an estimated 37 percent of MAP cases involved disputes over a tax adjustment related to services provided by a non-U.S. entity, such as a foreign corporation. Figure 6 also shows that disputes concerning the provision of services (both U.S. and non-U.S.) are estimated to account for 61 percent of cases, which far exceeded disputes over the use of intangible property, at 17 percent, or the sale of tangible property, at 15 percent. Conducting similar reviews of this type of information could help APMA better match its resources in terms of experience with different types of tax issues. We also compared tax issues identified in MAP cases with the transactions covered in APAs. The results illustrate how tracking tax issues could be useful for improving the administration of both programs. For example, as shown in figure 7, 23 percent of APA transactions covered sales of tangible property into the United States in 2014. Our categorization of MAP cases reported in figure 6 shows sales of tangible property into the United States as a disputed issue in only an estimated 8 percent of those cases. This difference in relative frequencies may suggest a connection between the programs, as tax practitioners have suggested increasing the use of APAs as a way of reducing international tax disputes. However, some of the differences in percentages between figures 6 and 7 could arise from differences in the years covered and in the categorization of tax issues. We also categorized the information to illustrate how tracking tax issues and other characteristics, such as location and the outcomes of the dispute resolution process, could help with administration. For example, as shown in table 2, the tax issue with the largest estimated share of foreign MAP cases (67 percent) involved the provision of services. U.S. MAP cases, in contrast, were spread more evenly across tax issues, with no single tax category having an estimated share greater than 50 percent. Conducting a similar review of this type of information could help APMA match its resource allocations in terms of staff experience with different types of tax issues within its country-focused teams. Additionally, as table 3 shows, when we tracked outcomes of the dispute resolution process, we found that an estimated 69 percent of cases resolved by a combination of withdrawal and correlative relief involved the provision of services. For other outcomes, the provision of services is estimated to occur 49 percent of the time. Further research on how outcomes and tax issues may be related could also inform how APMA trains and assigns staff. Other analyses could examine the tax issue in relation to whether an economist was assigned or to the average processing time. These statistics may help provide insights into complex cases. Undertaking similar reviews across tax issues may help identify areas for increased scrutiny to ensure effective administration. Federal internal control standards state that as part of an effective internal control system, management should establish activities to monitor program performance. Reliable information on program operations requires the collection of quality data. Collecting key characteristics and conducting relevant analyses would help ensure effective internal control and could help improve USCA’s management of MAP cases. Conclusions In a world with a growing number of international transactions, the United States needs an efficient and effective dispute resolution process to ensure that it is protecting the U.S.
taxpayer and the U.S. corporate tax base. The MAP processes adopted by countries—including the United States—in their tax treaties are in place to prevent double taxation and ensure the accurate application of treaty provisions. USCA plays a key role in resolving disputes over double taxation, but the agency has weaknesses in its processes that hamper its efforts. First, USCA has not provided clear guidance to taxpayers on how the MAP process works. As a result, taxpayers may be unaware of the process and may not fully understand what to expect when they undergo it. Furthermore, USCA does not record when and for what reason there is contact between the taxpayer and USCA, which makes it difficult for USCA to ensure that taxpayers are informed about the progress of their cases. Second, USCA does not track the hours that analysts spend on cases or the milestones of cases. As a result, USCA does not have a full understanding of the efficiency of the MAP process, including ways to improve it. It also does not have processes to ensure the quality of the data it collects and therefore cannot ensure accurate performance measurement. While APMA aims to meet the minimum standards of the OECD, it does not analyze the data to identify areas for improvement. Analyses of USCA’s data could more fully inform its management decisions. A number of potential analyses of how cases are resolved are available. By forgoing these types of analyses, USCA may be unaware of certain trends, possible explanations for them, or any need to adjust guidance or resources to address these issues. Finally, many of APMA’s tasks depend on factors beyond its control (for example, the volume of taxpayer requests), but management of the processes could benefit from the collection and analysis of well-defined measures and quality data. Recommendations for Executive Action We are making the following eight recommendations to IRS. The Commissioner of Internal Revenue should direct USCA to provide an overview of the MAP process that is more accessible and transparent than the Revenue Procedure. (Recommendation 1) The Commissioner of Internal Revenue should direct USCA to ensure that APMA staff record and track contact with taxpayers. (Recommendation 2) The Commissioner of Internal Revenue should direct USCA to ensure that APMA staff record and track the hours they spend on MAP cases. (Recommendation 3) The Commissioner of Internal Revenue should direct USCA to ensure that APMA identifies and records the dates of key milestones throughout MAP case resolutions. (Recommendation 4) The Commissioner of Internal Revenue should direct USCA to ensure that APMA puts procedures in place to review the quality of inventory data. (Recommendation 5) The Commissioner of Internal Revenue should direct USCA to ensure that APMA records the dollar amounts of MAP case outcomes in its database. (Recommendation 6) The Commissioner of Internal Revenue should direct USCA to ensure that APMA analyzes trends in case characteristics as part of routine program management activities. (Recommendation 7) The Commissioner of Internal Revenue should direct USCA to ensure that APMA identifies and records categories of the tax issues relevant to the dispute. (Recommendation 8) Agency Comments We provided a draft of this report to the Commissioner of Internal Revenue for review and comment. In its written comments, reprinted in appendix II, IRS agreed with our eight recommendations and said it will provide detailed corrective action plans in its 60-day letter response to Congress.
IRS also provided technical comments, which we incorporated where appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of the Treasury, the Commissioner of Internal Revenue, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9110 or mctiguej@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. Appendix I: Objectives, Scope, and Methodology As noted earlier, to assess the extent to which the Internal Revenue Service (IRS) evaluates management of dispute resolution cases, we interviewed IRS officials. Having determined that the Advanced Pricing and Mutual Agreement Program (APMA) does not conduct analysis of mutual agreement procedure (MAP) case data, we used information from its existing inventory data to illustrate the types of analysis that may be possible. The inventory database APMA provided us contained all MAP cases that were closed from 2013 to 2017, as well as the current stock of open MAP cases. Because of a change in the method of recording the outcome variable between 2013 and 2014, we restricted our analysis of outcomes to 2014 to 2017. The inventory database did not include a variable for the tax issue in dispute. To illustrate the type of analysis that could be conducted if the tax issue were recorded, we collected a sample of MAP case files. To estimate features such as tax issue and outcome for the inventory database, we selected a generalizable random sample of 84 cases that was proportionally allocated across the four strata described in table 4. The strata reflected whether the initiating country was U.S. or non-U.S. and whether an economist was involved. This sample was selected from a population frame that consists of all files from APMA’s 2013-2017 resolved and 2017 pending inventories for cases resolved in years 2015 to 2017. Overall, this sample was designed to produce 95 percent confidence intervals for percentage estimates that are within approximately +/- 10 percentage points (for illustration, under simple random sampling assumptions, a sample of 84 cases and an estimated proportion of 50 percent yield a margin of error of about 1.96 × √(0.25/84), or roughly 10.7 percentage points, before any finite population correction). The sample is not designed to provide estimates for other reporting groups at the same level of precision, and all margins of error are reported along with estimates. To create a tax issue variable, we reviewed the summary of competent authority issues required by Rev. Proc. 2015-40 to be included in the MAP request letter. We then allocated the tax issue described in the narrative to APMA’s advanced pricing agreement transaction categories. Some case files included multiple tax issues, but these cases accounted for less than 18 percent of the sample. The illustrations provided rely on the first tax issue noted in the narrative. Table 5 provides the estimates and margins of error for the categories. Appendix II: Comments from the Internal Revenue Service Appendix III: Illustrative Examples of Dispute Resolutions The following tables illustrate how a resolution can be reached in different types of disputes.
Table 6 provides a hypothetical example of a U.S.-initiated adjustment to a transfer price and a resolution that provides full relief from double taxation through a combination of partial withdrawal and correlative relief. In this example, the U.S. multinational corporation (MNC) parent sells a product to its subsidiary incorporated in a foreign country for $1,000. The U.S. parent is taxed on the income of $1,000 from the sale and the subsidiary is able to deduct that payment. The U.S. tax authority audits the parent’s return and determines that the price the parent sold the product for was too low and adjusts the price up from $1,000 to $2,000, resulting in an increase in taxable income. The U.S. MNC parent disputes the adjustment and requests assistance from the U.S. Competent Authority (USCA). The new adjusted transfer price results in $1,000 that is subject to double taxation because the foreign subsidiary has not deducted the additional $1,000 as the price paid to the U.S. parent, while the U.S. tax authority is now considering that income taxable. USCA negotiates with the foreign competent authority and the two parties agree on a revised transfer price of $1,600. The negotiated resolution results in USCA agreeing to withdraw $400 of the original adjusted amount of the transfer price. In turn, the foreign competent authority agrees to correlative relief in the form of an increased deduction of $600 of the additional price that the foreign subsidiary will pay the U.S. parent. The taxpayer receives full relief from double taxation since the total of the withdrawal and the correlative relief erases the $1,000 of double-taxed income that resulted from the increased adjustment. Alternatively, foreign tax authorities can make adjustments that affect a U.S. taxpayer. Table 7 provides a hypothetical example of a foreign-initiated adjustment to a cost-sharing arrangement, and a resolution that provides full relief from double taxation, again, through a combination of partial withdrawal and correlative relief. In this scenario, the U.S. parent and its foreign subsidiary agree to share the costs of developing a product that will yield income of $10,000. As part of the agreement, the subsidiary will receive 10 percent of the income yield while the parent will receive 90 percent. The foreign tax authority audits the subsidiary’s tax return and determines that the amount of income assigned to the subsidiary is too low. It then adjusts the percentage to 50 percent, increasing the income allocated to the subsidiary from $1,000 to $5,000. This adjustment results in a potential $4,000 of income that is now subject to double taxation. The subsidiary decides that resolving this dispute locally is unlikely and, through the U.S. parent, requests assistance from USCA. USCA and the foreign competent authority negotiate a new allocation of 35 percent, resulting in new income allocated to the subsidiary of $3,500. This resolution results in a combination of withdrawal and correlative relief. The foreign competent authority agrees to withdraw $1,500 of the adjustment as income to the subsidiary, and the U.S. competent authority agrees to reduce the amount taxable to the parent by $2,500. The taxpayer receives full relief from double taxation since the total of the withdrawal and the correlative relief erases the $4,000 of double-taxed income that resulted from the increased adjustment.
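The arithmetic that both hypothetical examples share can be stated compactly: the double-taxed amount equals the full adjustment, and full relief requires the withdrawal and the correlative relief to sum to that amount. The sketch below simply restates the two tables’ numbers; it is an illustration of the examples above, not a model of actual MAP computations.

```python
def resolution(adjustment, retained):
    """Split full relief between withdrawal and correlative relief.

    adjustment: income added by the auditing country's adjustment
    retained:   portion of the adjustment the competent authorities agree to keep
    """
    withdrawal = adjustment - retained   # the auditing country gives this back
    correlative_relief = retained        # the other country relieves the rest
    # Full relief: the two components must erase the double-taxed amount.
    assert withdrawal + correlative_relief == adjustment
    return withdrawal, correlative_relief

# Table 6: U.S. adjustment of $1,000 ($1,000 -> $2,000 transfer price);
# the negotiated $1,600 price retains $600 of the adjustment.
print(resolution(1000, 600))    # -> (400, 600)

# Table 7: foreign adjustment of $4,000 (10 -> 50 percent of $10,000);
# the negotiated 35 percent allocation retains $2,500 of the adjustment.
print(resolution(4000, 2500))   # -> (1500, 2500)
```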
Appendix IV: Examples of Analysis that the Advanced Pricing and Mutual Agreement Program Could Do with Currently Available Data Not all mutual agreement procedure (MAP) cases are the same in terms of complexity. One possible indicator of complexity is whether an economist was assigned to a case. The United States Competent Authority (USCA) ranks the cases in order of complexity and assigns economists to the more complex cases. Our analysis of Advanced Pricing and Mutual Agreement Program (APMA) data in figure 8 shows how the use of economists varies by source of MAP cases. For most years, APMA assigned economists to a higher percentage of cases that involved U.S.-initiated than Canadian-initiated adjustments. For most years, the share of economists assigned to foreign-initiated cases was similar to that for U.S.-initiated cases. However, in 2015 and 2016 the share of U.S. cases receiving an economist was more than double that of all foreign-initiated cases. For most years, an economist was assigned to less than a quarter of foreign and U.S. MAP cases. We also analyzed USCA inventory data to compare the percentage of cases that were assigned an economist and the average time it took to resolve cases. As figure 9 shows, the average time a case was in processing tends to decrease when the percentage of cases that are assigned an economist increases. This relationship suggests that assigning economists to a case may reduce the time it takes to resolve it despite the greater complexity of the case. However, there may be many other factors that could influence processing time. APMA officials noted that these factors include, for example, the readiness of the foreign competent authority to discuss the case in a timely fashion. Further analysis would be necessary to isolate the effects of specific resource allocation changes on process efficiency. Appendix V: GAO Contact and Staff Acknowledgments In addition to the contact named above, Kevin Daly (Assistant Director), Jennifer G. Stratton (Analyst-in-Charge), Bertha Dong, Dawn Bidne, Michael Bechetti, Sonya Vartivarian, Ed Nannenhorn, David Dornisch, and A.J. Stephens made significant contributions to this report.
Why GAO Did This Study With increasing globalization, multinational corporations can take advantage of differences in countries' corporate tax systems to reduce their overall tax liabilities. However, globalization can also lead to disputes about the correct tax liability for U.S. MNCs in different countries. GAO was asked to review how the United States administers the process for resolving international tax disputes when a U.S. MNC disagrees with a tax determination of another country. This report (1) describes IRS's dispute resolution process, (2) assesses the information IRS provides to taxpayers about the process, and (3) assesses the extent to which IRS evaluates its management of dispute resolution cases. GAO reviewed IRS guidance on the MAP process, interviewed IRS officials, and compared IRS actions to federal standards for internal control and GAO's criteria for a good tax system. GAO analyzed MAP data for cases closed from 2013 to 2017 as well as a stratified random sample of MAP case files. What GAO Found A U.S. multinational corporation (MNC) operating in a foreign country is subject to taxes in that country as well as in the United States. The U.S. MNC's tax return may be audited by the United States or the other country. Such audits can result in an adjustment to the U.S. MNC's taxable income that may result in income being subject to tax in both countries. If the U.S. MNC disagrees with the adjustment, it can ask the United States Competent Authority (USCA) within the Internal Revenue Service (IRS) to help resolve the dispute through the mutual agreement procedure (MAP). Generally, disputes are resolved by one country withdrawing some or all of the adjustment and the other country providing other relief to the MNC to address double taxation of income. The following figure provides an overview of the dispute resolution process. Dispute resolution assistance is available to U.S. MNCs that need it, and USCA provides comprehensive technical information on its website on how to request assistance. However, because USCA's website does not provide an overview or plain language guidance on the MAP process, U.S. MNCs may not have clear information on how to navigate the process. USCA has taken a number of steps to ensure efficient management of MAP cases, including assigning staff with the requisite background and skills to cases according to their complexity and organizing staff into teams that specialize by country. However, GAO identified a number of weaknesses that affect USCA's management of MAP cases, including the following: key data are not tracked, and existing data are not used to assess the effective allocation of resources for the program; few controls have been established to monitor and ensure the reliability of the data in the case management database; and there is a lack of trend analyses on dispute case characteristics that could help inform management decision making and the more efficient operation of the program. What GAO Recommends GAO is making a total of eight recommendations, including that IRS improve the clarity of information on the dispute resolution process, track and use dispute resolution case data, ensure the quality of case data, and analyze trends in dispute case characteristics. IRS agreed with GAO's recommendations and said it will provide detailed corrective action plans.
Background FHWA and FTA fund and oversee highway and transit projects, respectively. FHWA funds highway projects through formula grants to state DOTs, provides technical expertise to state DOTs, and conducts oversight of highway projects through its division offices in each state. FTA funds a variety of transit programs through formula and competitive grants and conducts oversight of transit projects’ planning and design through 10 regional offices. Completing major highway and transit projects involves complex processes that depend on a wide range of stakeholders conducting many tasks. Project sponsors—the state DOTs and local transit agencies—are the entities that develop the environmental review documents to be approved by the federal agencies. Examples of highway projects that may undergo environmental review are bridge construction or roadway repaving, and examples of transit projects include extension of light rail lines or construction of passenger ferry facilities. Project sponsors that do not use federal funds for a project generally do not need to meet NEPA requirements, but may still need to satisfy state or local environmental review requirements. As we have previously reported, highway projects typically include four phases, and transit projects follow similar processes. 1. Planning: Project sponsors assess the need for a project in relation to other potential transportation needs. 2. Preliminary design and environmental review: Project sponsors identify potential transportation solutions based on identified needs, the potential environmental and social effects of those solutions, a project’s cost, and construction location. They then analyze the effect, if any, of the project and potential alternatives on the environment. Based on the analysis, as well as public input, the preferred alternative is selected. 3. Final design and right-of-way acquisition: Project sponsors finalize design plans and, if necessary, acquire private real property for the project right-of-way and relocate any affected residents and businesses. 4. Construction: Project sponsors award construction contracts, oversee construction, and accept the completed project. In the preliminary design and environmental review phase, many activities are to be carried out by the project sponsor pursuant to NEPA and other federal laws. NEPA’s two principal purposes are to ensure (1) that an agency carefully considers detailed information concerning significant environmental impacts and (2) that environmental information is available to public officials and citizens before decisions are made and actions are taken. For highway and transit projects, the project sponsor is responsible for preparing documentation showing the extent of the project’s environmental impacts, in accordance with NEPA, and determining which of the three following documentation types is needed: An environmental impact statement (EIS), the most comprehensive of the three documentation types, is required for projects that have a significant effect on the environment. In broad terms, the lead federal agency, FHWA or FTA, starts the EIS process by publishing a notice of intent in the Federal Register. The lead agency then must engage in an open process—inviting the participation of affected government agencies, Indian tribes, the proponent of the action, and other interested persons—for determining the scope of issues to be addressed and for identifying the significant issues related to a proposed action.
The lead agency then is to coordinate as appropriate with resource agencies, such as the U.S. Army Corps of Engineers or the Fish and Wildlife Service, solicit comments from the public on a draft EIS, incorporate comment responses as appropriate into a final EIS, and issue a record of decision. Project sponsors are to prepare environmental assessments when, among other things, it is not clear whether a project is expected to have significant environmental impacts. An environmental assessment is intended to be a concise document that, among other things, briefly provides sufficient evidence and analysis for determining whether to prepare an EIS. If the agency determines that there are no significant impacts from the proposed action, then the agency prepares a Finding of No Significant Impact that presents the reasons why the agency made that determination. If the agency determines the project may cause significant environmental impacts, it prepares an EIS. Categorical exclusions refer to projects that would not individually or cumulatively have a significant effect on the environment. These projects generally require no or limited environmental review or documentation under NEPA. Examples of highway projects that are generally processed as categorical exclusions include resurfacing roads, constructing bicycle lanes, installing noise barriers, and landscaping. While FHWA and FTA are the federal agencies responsible for ensuring NEPA compliance on highway and transit projects, if certain requirements are met, FHWA or FTA may assign its NEPA authority to a state, and that state may then assume federal NEPA authority. States assume this authority subject to the same procedural and substantive requirements as would apply to FHWA or FTA. Specifically, the NEPA Assignment Authority provision provides authority for FHWA to assign federal NEPA authority to states for approving an EIS, environmental assessment, or categorical exclusion. States must apply to FHWA or FTA, which reviews the state’s suitability to assume the authority based on meeting certain regulatory requirements and the state’s capability to assume the responsibility. States must enter into a written memorandum of understanding (MOU) and must, among other things, expressly consent to the jurisdiction of federal courts by waiving sovereign immunity for any responsibility assumed for NEPA. The MOU is for a term of not more than 5 years and is renewable. MOUs are unique to each state; however, they all contain certain sections such as assignments of authority, acceptance of jurisdiction, and performance measures. For the first 4 years, FHWA is to conduct an annual audit to ensure compliance with the MOU, including compliance with all federal laws. After the fourth year, FHWA is to continue to monitor state compliance with the MOU, using a more limited review. In prior reports, we identified a number of factors that can affect the length of time required to complete transportation projects. For highway projects, we found that the large number of stakeholders and steps (which include environmental reviews) in the project delivery process, availability of funding, changing priorities, and public opposition can lead to longer project time frames. For transit projects, we found that local factors specific to each project determine the project development time frame, including the extent of community support and extent of local planning prior to approval of funding.
We found that for the 32 projects we reviewed, the environmental review process was tied with stakeholder coordination as the third most frequently cited factor contributing to the length of the project development process, according to transit project sponsors. The Three Most Recent Transportation Authorizations Included Numerous Provisions for Accelerating Highway and Transit Project Delivery We identified 34 project delivery provisions that apply to highway projects and 29 such provisions that apply to transit projects. These provisions are intended to streamline various aspects of the NEPA process, making it more efficient and timely. Most of the provisions apply to both types of projects. Based on our review, we grouped the provisions into four general categories: Accelerated NEPA Review, Administrative and Coordination Changes, NEPA Assignment, and Advance Planning (see table 1). See appendix III for the full list and a description of each project delivery provision we identified. The Accelerated NEPA Review category’s provisions generally establish certain conditions that permit projects, if the specific conditions are applicable, to exclude certain actions from a more detailed NEPA review. These provisions primarily consist of new categorical exclusions. Additionally, the Minor Impacts to Protected Public Land provision authorizes a historic site, parkland, or refuge to be used for a transportation project if that project is determined to have a de minimis impact on the environment. The Administrative and Coordination Changes category’s provisions are more process oriented. These provisions, for example: (1) establish time frames for parts of the NEPA review process, (2) encourage the use of planning documents and programmatic plans as well as a coordination plan for public and federal agency participation in the environmental review process, and (3) seek to avoid duplication in NEPA review documents. The NEPA Assignment category’s provisions authorize FHWA or FTA, as discussed above, to assign their NEPA authority to states. The first of the two provisions—the ‘NEPA Assignment Authority’ provision—authorizes FHWA or FTA to assign federal NEPA authority to states for reviewing EIS, environmental assessment, and some categorical exclusion reviews, so long as the categorical exclusion does not require an air-quality review that involves the Environmental Protection Agency. The second provision—the Categorical Exclusion Determination Authority provision—allows FHWA or FTA to assign limited NEPA authority to states to review categorical exclusions. This authority can apply to categorical exclusions with air-quality reviews, as well as all other categorical exclusions. The Advance Planning category’s provisions are not part of the agency’s environmental review process and are not applicable to transit projects. These provisions allow for certain activities in the highway project development cycle, such as land acquisition, to occur prior to NEPA approval. The three provisions in this category include the following: The Advance Design-Build Contracting provision permits a state to release requests for proposals and award design-build contracts prior to completing the NEPA process; however, a contractor may not proceed with final design or construction during the NEPA process. The Advance Acquisition of Real Property provision authorizes states to acquire real property interests, such as land, for a project before completion of the NEPA process.
The 2-phase Contracts provision authorizes the awarding of contracts on a competitive basis for preconstruction services and preliminary project design before the completion of the NEPA process. Most of the project delivery provisions are optional, which we define to mean that the relevant entities (a federal agency or state or local transportation agency) can choose to use the provision if circumstances allow. For example, a state highway project within an existing operational right-of-way may have the option to use the categorical exclusion for projects within an existing operational right-of-way. Specifically, 22 of the 34 highway project delivery provisions and 17 of the 29 transit project delivery provisions are optional. By contrast, 12 provisions are requirements for both highway and transit projects, which we define to mean that federal agencies or state or local transportation agencies that are subject to a provision must adhere to the requirements and obligations in the provision if all the conditions for its use have been satisfied. Required provisions are primarily contained in the Administrative and Coordination Changes category. For example, for highway projects, the Programmatic Agreements for Efficient Environmental Review provision, enacted in 2012, requires FHWA to seek opportunities with states to enter into agreements that establish streamlined processes for handling routine projects, such as highway repair. Prior to 2012, FHWA actively encouraged programmatic agreements between state DOTs and FHWA division offices, but seeking opportunities to enter such agreements was not required. State DOTs Reported That a Number of Provisions They Used Sped Up Highway Project Delivery, While for Most Selected Transit Agencies Effects Were Unclear More Than Half of Optional Provisions Were Reported to Be Used by a Majority of State DOTs on Highway Projects According to survey responses, 10 of the 17 optional provisions included in the survey—which primarily fall under the Accelerated NEPA Review category—were each used by 30 or more state DOTs (see fig. 1). Fifty state DOTs reported using the Minor Impacts to Protected Public Land provision—the most of any of the provisions. Some of the less widely used provisions—the 7 provisions reported to be used by 21 or fewer states—only apply to specific circumstances or highway projects that many state DOTs undertake less frequently. For example, the Categorical Exclusion for FHWA-funded Ferry Facility Rehabilitation or Reconstruction provision would only apply to states that operate ferry services, a circumstance that may explain its relatively low use. Also, for 3 of these 7 provisions, 10 or more states reported that they plan to use the provision in the future. For example, while 21 state DOTs used the Reduce Duplication by Eliminating Detailed Consideration of Alternative Actions provision, an additional 17 state DOTs reported that they plan to use it. All of the optional provisions were reported to be used by at least 14 state DOTs. Some states reported that they have not used certain provisions and have no plans to do so. Our survey served as a nationwide review of the use of the provisions and was not designed to determine why each state did or did not use each provision. However, our discussions with selected states and optional comments provided in the survey provided some additional insight into states’ use of the provisions.
Officials at some state DOTs reported that they had not used certain categorical exclusions because other categorical exclusions could also apply to those projects. Specifically, officials in 4 state DOTs told us that they did not use 4 categorical exclusion provisions for this reason. For example, officials at the Colorado DOT said that the Categorical Exclusion for Geotechnical and Archeological Investigations provision has not been used in Colorado because other categorical exclusions were more applicable. Similarly, officials at the Oklahoma DOT said that they had not used the Categorical Exclusion for Projects within the Existing Operational Right-of-Way provision because most of those projects already qualify for a categorical exclusion under other criteria. For other provisions, such as the Categorical Exclusion for Multimodal Projects provision, some state DOTs, such as the Nebraska DOT, indicated that they do not conduct multimodal projects and have no plans to do so for the foreseeable future. About Two-Thirds of the Optional Provisions Reportedly Sped Up Highway Project Delivery for the Majority of Users For 11 of the 17 optional provisions included in our survey, a majority of state DOTs that indicated they used the provisions (users) reported that the provisions sped up project delivery (see fig. 2). Over 90 percent of users of the Minor Impacts to Protected Public Land provision reported that it sped up project delivery (46 out of 50 state DOTs using the provision). FHWA officials said that without the Minor Impacts to Protected Public Land provision, a state DOT would need to complete an environmental assessment to show that performing even a small project, such as adding a small bus stop on the periphery of a park, would not have significant effects on the environment. The Minor Impacts to Protected Public Land provision now allows a state DOT to complete transportation projects that have a minimal environmental effect on historic sites and parklands more quickly because the state DOT can bypass the environmental assessment process. In our survey and discussions with state DOTs, some officials noted how much time the provision can help them save. Officials at the Virginia DOT estimated that a 9-month to 1-year review could be cut to 2 to 4 months. An official at the Colorado DOT said that reviews that used to take 6 months now take 30 days. And officials at the Mississippi DOT said that they used the provision when adding turn lanes near parks and were able to bypass a review process that previously took 6 to 12 months. Other examples of sped-up project delivery provided by state DOTs include the following: Categorical Exclusion in Emergencies provision: Mississippi DOT officials said that this provision has been helpful, particularly given project delivery lessons learned since Hurricane Katrina. They said the provision allows the state DOT to use a categorical exclusion, which takes 6 to 8 months for some projects, in place of an environmental assessment, which can take 12 to 18 months and involves additional review steps such as providing evidence and analysis as to why a project does not require an EIS. Use of Federal Highway or Transit Funds to Support Agencies Participating in the Environmental Review Process provision: Arizona DOT officials said that the state DOT funds positions in the Army Corps of Engineers and the Fish and Wildlife Service that help lessen the time it takes for those agencies to provide comments on Arizona DOT projects’ NEPA reviews.
The officials estimated these positions reduce review time by about one month compared to when these agencies did not have Arizona DOT-funded positions. For the remaining six optional provisions, 41 to 58 percent of users reported that the provisions had no effect on project delivery. Based on discussions with selected state DOTs and comments included with survey responses, officials at some state DOTs reported that the provisions did not have any effect because the states had already developed similar processes, either through programmatic agreements with their FHWA division office or at their own initiative. As a result, the state DOTs did not realize any new time savings after the provisions were enacted in law. For example, for each of three provisions that allow for certain documentation to be eliminated for categorical exclusions, officials at seven state DOTs reported that they had already developed similar processes through programmatic agreements with their FHWA division office. Further, five state DOTs reported that the Early Coordination Activities in Environmental Review Process provision had no effect because they already had a similar coordination process in place. Some states used such a process at their own initiative and others in conjunction with their FHWA division office. Among Required Provisions, about Three-Quarters of State DOTs Reported That “Programmatic Agreements” Helped Speed Up Highway Projects, While the Effects Are Mixed for Other Provisions Of the 12 required provisions—which fall into the Administrative and Coordination Change category—only the Programmatic Agreements for Efficient Environmental Review provision was reported by a majority of state DOTs (39) to have sped up project delivery (see fig. 3). For example, officials at the Mississippi DOT reported that a programmatic agreement with the FHWA division office can allow it to save 6 to 8 months when processing categorical exclusions for projects with minimal right-of-way acquisition. They explained that they no longer had to wait for the FHWA division office to process the categorical exclusion. As previously discussed, prior to 2012, FHWA actively encouraged, but did not require, programmatic agreements between state DOTs and FHWA division offices. In interviews and optional comments from the survey, officials reported that programmatic agreements, both those entered into before and after the enactment of the provision, had sped up project delivery. We did not determine the number of state DOTs that attributed the speedup in project delivery to the 2012 provision, as opposed to those who attributed it to the earlier programmatic agreements with their FHWA division offices. All of the required provisions reportedly sped up project delivery for at least 4 state DOTs. For 5 of the 12 provisions, between 10 and 18 states responded that the provisions sped up project delivery. For example, officials at the Ohio DOT estimated that the Combine Final Environmental Impact Statement and Record of Decision in Certain Cases provision saves them a minimum of 3 months. For the remaining 6 provisions, between 4 and 7 states reported that the provisions sped up project delivery, but each of these provisions also had at least 16 states that reported the provision had no effect on project delivery. Our survey served as a broad-based review of the effects of the provisions and was not designed to determine why each provision had the reported effects; however, some states provided voluntary comments in the survey.
As with various optional provisions, some state DOT officials reported no effect because the state had already developed processes and practices that they said achieved what the provisions formalized, for example: Coordination Plan for Public and Agency Participation provision: In discussions and from optional comments, 4 state DOTs said that they already had a similar process in place. Officials at the Louisiana DOT stated that they performed a similar process prior to the ‘Coordination Plan for Public and Agency Participation’ provision’s enactment in law in an effort to coordinate with the public and other government agencies. 45-Day Limit to Identify Resource Agencies provision: In interviews and optional survey comments, officials at 2 state DOTs said that they already had a similar process in place to promptly identify stakeholder agencies. Issue Resolution Process provision: Wyoming DOT officials said that they had been performing a similar process prior to this provision’s enactment in law to ensure consensus among stakeholders. Some state DOTs reported that it was too early to determine the effects of several provisions, particularly more recently enacted provisions. For 5 of the 12 required provisions, more than one-third of state DOTs (over 17 states) reported that it was too soon to judge the provisions’ effects. Four of these 5 provisions were enacted in the FAST Act in 2015. Consequently, state DOTs that used these provisions had a short window of time to assess any potential effect on project delivery—particularly given that highway projects often take a number of years to complete. Also, while our survey did not ask state DOTs when they had most recently initiated an EIS, several state DOTs voluntarily noted that they had not done so since the FAST Act. Certain provisions apply only to projects undergoing an EIS; states that have not done an EIS since such provisions were enacted would not have had the opportunity to use them. One such provision is the 45-Day Limit to Identify Resource Agencies provision, for which 19 state DOTs reported that it was too early to judge the effects. For 5 of the 12 provisions, relatively few state DOTs (between one and eight) reported that the provision had slowed down project delivery. Eight states reported that the Coordination Plan for Public and Agency Participation provision slowed down project delivery, the most for any provision. According to the Minnesota DOT, this provision slowed down project delivery because it formalized and required a specific coordination process in addition to those that had already been voluntarily occurring with relevant federal and state resource agencies. Formalizing this process resulted in resource agencies taking longer to provide responses to the Minnesota DOT. Other states similarly said that this provision’s additional formal processes slowed down project delivery. We defined required provisions to mean that federal agencies or state or local transportation agencies that are subject to the provision must adhere to requirements and obligations in the provision if all the conditions for its use have been satisfied. States may not have had the opportunity to apply some of the required provisions that apply to them because they did not have exposure to the circumstances and conditions that would invoke a provision’s use. For example, a state would not be exposed to the 150-Day Statute of Limitations provision if it had not been subject to a lawsuit.
Unlike for the optional provisions, we did not ask states whether they elected to use the required provisions, since state DOTs that are subject to a provision must adhere to its requirements and obligations.

Selected State DOTs Reported Using the Three Advance Planning Provisions That Affect Project Delivery but Precede NEPA Review

Two of the three provisions from the Advance Planning category were used by a majority of the 10 state DOTs we interviewed, and most of the state DOTs that used each provision stated that it sped up project delivery. Specifically:

• Advance Design-Build Contracting provision: 8 state DOTs used this provision, 5 of which reported it sped up highway project delivery.

• Advance Acquisition of Real Property provision: 6 state DOTs used this provision, 4 of which reported it sped up highway project delivery.

• 2-Phase Contracts provision: 5 state DOTs used this provision, 4 of which reported it sped up highway project delivery.

Some state DOT officials provided examples of how the provisions affected their project delivery. For example, California DOT officials said that the Advance Acquisition of Real Property provision saved them a few months on small projects involving one or two parcels of land; for a large project involving hundreds of commercial and residential parcels, they estimated time savings of more than a year. Similarly, Illinois DOT officials said that the provision has yielded time savings of 6 months to a year in instances where the DOT needs to purchase residential property.

Most Project Delivery Provisions Were Used by Selected Transit Agencies, but the Provisions' Effects on Project Delivery Were Generally Unclear

More than two-thirds of the provisions designed to speed up transit project delivery were reportedly used by at least one of the 11 selected transit agencies. We asked officials in selected transit agencies to report their use of 29 project delivery provisions applicable to transit agencies, 17 of which are optional and 12 of which are required. Of the 29 provisions, 6 were used by 4 or more selected transit agencies (see fig. 4). The most used optional provision, by 7 transit agencies, was the Minor Impacts to Protected Public Land provision described earlier, followed by the Planning Documents Used in NEPA Review provision, used by 6 transit agencies.

Some transit agencies told us that the provisions they used sped up project delivery, and some provided estimated time savings. Chicago Transit Authority (CTA) officials told us that the Minor Impacts to Protected Public Land provision was extremely helpful for recent CTA projects involving historic properties. For example, CTA has implemented projects that involve track work at a station that is adjacent to a historic boulevard. They estimated that the Minor Impacts to Protected Public Land provision has reduced the time to complete documentation by several months. Similarly, a Tri-County Metropolitan Transportation District of Oregon official stated that the Minor Impacts to Protected Public Land provision has been instrumental because, in the past, the agency would have had to stop a project if it affected parkland. Southeastern Pennsylvania Transportation Authority (SEPTA) officials told us that they used the Categorical Exclusion for Minor Rail Realignment provision one or two times within the past 2 years. SEPTA estimated the provision saved the agency several months per project.
Officials stated that the provision allowed SEPTA to use a categorical exclusion in place of an environmental assessment. SEPTA officials also said they saved staff time and approximately $100,000 a year in consultant fees and agency staff resources by using the Categorical Exclusion for Preventative Maintenance to Culverts and Channels provision. Capital Metro officials in Austin, Texas, told us they used the Categorical Exclusion for Projects within the Existing Operational Right-of-Way provision for a rail right-of-way project. They estimated the provision helped save at least 4 to 6 months in project delivery because the agency was not required to do an environmental assessment.

While some selected transit agencies reported that the provisions they used helped speed up project delivery or lower costs, the effects of the provisions—whether they sped up project delivery or streamlined the NEPA review process—were not clear to a majority of the selected transit agencies. Because transit agencies in our review do not track NEPA reviews—including their start and end dates—they were not able to assess how project time frames or costs were affected by the provisions. Officials from several selected transit agencies told us that their understanding of the project delivery provisions' effects was also limited by their reliance on engineering and environmental-planning consultants to prepare their NEPA documents. Officials from 4 of the 11 transit agencies told us that they rely on these consultants' knowledge of the provisions to prepare their NEPA documents. Further, officials from 1 transit agency said they required the assistance of their consultants to respond to our requests for information.

Nine of the 29 provisions were not used by any of the agencies, and no provision was used by more than 7 agencies. Our discussions with selected transit agency and FTA officials provided some insight into transit agencies' use of the provisions, specifically:

• Limited transit projects needing EISs: Transit agencies that do not prepare EISs may have fewer opportunities to use some of the provisions. Following discussions with FTA officials, we examined the number of times transit agencies filed a notice of intent to prepare an EIS in the Federal Register from 2005 through 2016 as a proxy to identify those transit agencies that would likely use a number of the project delivery provisions. We found that 48 transit agencies (out of several hundred transit agencies) filed notices of intent from fiscal year 2005 through 2016 but that, of the 48 transit agencies, 34 had filed a notice of intent only once during that time. In general, the vast majority of transit agencies have little recent experience preparing EIS documentation and using the provisions that are triggered by an EIS. For example, only one transit agency (Tri-County Metropolitan Transportation District of Oregon) had filed a notice of intent to prepare an EIS after the FAST Act was enacted in 2015.

• Duration of transit projects: Some instances where transit project delivery provisions were not used could be due to the number of years it takes to complete transit projects. According to FTA officials, whereas sponsors of highway projects may have new projects initiating and requiring NEPA reviews on a rolling basis, transit agencies operate differently. A transit agency may have a project that goes through a NEPA review and then begins construction that can last a number of years.
The transit agency may not have another project that requires an EIS for several years. For example, the New York Metropolitan Transportation Authority (MTA), the largest transit agency by ridership in the country, completed its last EIS review in 2004 and has since been working on construction of that project, according to FTA officials. While MTA has been receiving FTA funds for construction, no additional project has undergone an EIS.

• Changing provisions and delayed guidance: Some transit agency officials told us that understanding the changes in the project delivery provisions across the three enacted surface transportation authorization acts—for example, changes in categorical exclusions included in SAFETEA-LU, MAP-21, and the FAST Act—posed challenges to using the provisions. Further, some transit agency officials stated that the lag time in receiving guidance from FTA on the changing provisions also posed challenges to using some of the provisions.

DOT's FHWA Has Assigned Six States NEPA Authority, and Two States Reported Time Savings, but FHWA Has Not Provided Guidance on Measuring Effects

DOT, specifically FHWA, has assigned its NEPA approval authority to six states, and other states are interested in this authority. Of the six states, California and Texas have completed some NEPA reviews and determined that they have achieved time savings through state approval of NEPA documents rather than federal approval. However, we found the reported time savings to be questionable for several reasons, including challenges California and Texas faced in establishing sound baselines for comparison. Despite this finding, the reported time-savings information is used by other states seeking NEPA authority and in reporting to DOT and Congress. FHWA focuses its oversight of NEPA assignment states on ensuring that these states have the processes in place to carry out FHWA's NEPA responsibilities, according to a written agreement between each state and FHWA, and does not focus on determining whether states are achieving time savings.

FHWA Has Assigned Six States NEPA Authority, and Additional States Are Interested

FHWA has assigned its NEPA authority to six states, enabling those state DOTs to assume FHWA's authority and approve state-prepared NEPA documentation for highway projects in lieu of seeking federal approval. California's NEPA authority began in 2007, as the first state in the then-pilot program, and continued when the program was made permanent in 2012. Once eligibility expanded to all states, Texas became the second state to be assigned NEPA authority, in 2014, followed by Ohio in 2015, Florida in 2016, and Utah and Alaska in 2017. The 2005 Conference Report accompanying SAFETEA-LU indicates that the NEPA Assignment Authority provision was created to achieve more efficient and timely environmental reviews, which are a key benefit sought by participating states. The report states that the NEPA assignment program was initially created as a pilot program to provide information to Congress and the public as to whether delegation of DOT's environmental review responsibilities resulted in more efficient environmental reviews. In addition, in MAP-21, Congress declared that it is in the national interest to expedite the delivery of surface transportation projects by substantially reducing the average length of the environmental review process.
State DOT officials from the five NEPA assignment states we reviewed cited anticipated time savings or greater efficiency in environmental review as a reason for taking on this authority. For example, Texas DOT officials said they expected to save time by eliminating FHWA approval processes that they described as time consuming. With NEPA authority, the state puts in place its own approval processes to carry out the federal government's NEPA review responsibilities and agrees to take on the risk of legal liability for decisions made in this capacity.

Additional states have expressed interest and have taken steps to apply for NEPA authority. Officials from three state DOTs told us they plan to apply for NEPA authority, and one of these, the Arizona DOT, has taken the first step in the process and obtained the requisite changes in state law. In explaining the anticipated benefits of NEPA assignment to the state legislature, an Arizona DOT official cited time savings reported by California and Texas as a reason for taking on the application process. California and Texas DOT officials had shared their time-savings results during a 2015 peer exchange event held by an association of state highway officials for states that are in the early stages of applying for NEPA authority or are considering doing so. Also, the Texas DOT testified before a congressional committee in 2015, describing the time savings for environmental assessment reviews under its NEPA authority and its role in communicating this information to other states pursuing NEPA authority.

State DOTs Calculate Time Savings, but Reported Savings Are Questionable

The MOUs, signed with FHWA by each of the five states we reviewed, set out performance measures for comparing the time to complete NEPA approvals before and after the states' assumption of NEPA responsibilities. To calculate time savings, each state has established a baseline—the time it took to complete NEPA reviews before it assumed NEPA authority—to compare to the time it takes to complete NEPA reviews after assuming NEPA authority. The baseline is to serve as a key reference point in determining the efficiency of state-led NEPA reviews. Thus far, the two states that have had NEPA authority long enough to report results are California and Texas, and only California has reported results for EISs.

The California DOT reported that its EIS reviews now take about 6 years to complete, which it determined to be a 10-year improvement over the 16-year (15.9 years) baseline the state DOT established. For environmental assessment reviews, the California DOT reported completion times of about 3.5 years, which it determined to be a 1-year improvement over the established baseline. The Texas DOT has not started and completed an EIS review since assuming NEPA authority but reported that its environmental assessment reviews have taken about 1.5 years, compared to the baseline of almost 2.5 years.

However, we found California and Texas DOTs' reported time savings to be questionable due to the methods used to compare time frames and challenges associated with establishing baselines. First, there is an inherent weakness in comparing NEPA review time frames before and after NEPA authority: the comparison does not isolate the effect of assuming NEPA authority on NEPA review time frames from other possible factors.
As discussed earlier, we have previously found that such factors include the extent of public opposition to a project and changes in transportation priorities. Further, according to a report from the American Association of State Highway and Transportation Officials, such a comparison does not include information to control for non-environmental factors that are important to project delivery time frames, including delay in completing the design work necessary to advance the environmental review and changes in project funding that put a project on hold. Moreover, neither California's nor Texas' time frame comparisons isolate the effects of NEPA assignment from other streamlining initiatives that may have helped accelerate delivery of projects, such as potential benefits realized from other project delivery provisions.

Second, California and Texas have faced challenges creating appropriate baselines. States are responsible for determining how many and which projects to include in baseline calculations and for adopting their own methodologies. While circumstances and conditions differ across states and states can be expected to have different experiences, California's current 16-year EIS baseline is over double that of Texas' EIS baseline. In 2012, we found that for the 32 projects in which FHWA was the lead agency and signed the EIS in fiscal year 2009, the average time to complete the process was about 7 years. According to information contained in California DOT reports to the state legislature from 2007 and 2009, California's original baseline for EISs consisted of a single project, which resulted in an EIS baseline of 2.5 years. In 2009, state DOT officials increased the number of EIS projects in order to achieve what they viewed as a more representative mix. This change increased California's EIS baseline six-fold, and the state has used the higher baseline consistently since that time. Specifically, California used the median of five projects that had review times of around 2.5 years, 6.2 years, 15.9 years, 16.6 years, and 17.3 years. These projects were selected because they were among the final EIS projects reviewed before California assumed NEPA authority.

However, the EIS baseline may not be meaningful. First, it includes outlier projects, that is, projects that took much longer than usual to complete. According to California DOT officials, this factor limits the ability to determine time savings because the outliers increased the EIS baseline and therefore make subsequent time savings look greater than they are. Next, despite the increase in the number of EIS projects included in the baseline, a 2016 California DOT report to the state legislature stated that this new EIS baseline may still not be meaningful because of the relatively small sample size, and therefore the inferences that can be drawn from the EIS analysis on time savings are limited. The report cautions that "the EIS analysis should not be used as a major indicator of the effectiveness of NEPA assignment," but still reports the EIS analysis results. However, California DOT uses the figure in determining and reporting time savings.
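The sensitivity of this median-of-five baseline to the outlier reviews can be shown with a minimal sketch. The calculation below is our illustration using the five review times reported above, not California DOT's published methodology, and the 15-year cutoff used to flag outliers is our assumption for illustrative purposes.

```python
from statistics import median

# EIS review times, in years, for the five pre-assignment projects that
# California DOT reported using to construct its baseline.
review_times = [2.5, 6.2, 15.9, 16.6, 17.3]

# Baseline as reported: the median of all five projects.
print(median(review_times))  # 15.9 years

# The same median with the three long-running reviews excluded as outliers
# (the 15-year cutoff is our assumption, not a published threshold).
typical = [t for t in review_times if t < 15.0]
print(median(typical))  # 4.35 years (the midpoint of 2.5 and 6.2)
```

Under this arithmetic, a post-assignment EIS completed in about 6 years shows a roughly 10-year improvement against the outlier-driven 15.9-year baseline but no savings at all against the lower figure, which is consistent with California DOT officials' own observation that the outliers make subsequent time savings look greater than they are.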
Information available on the California DOT's web site as of November 2017, for example, presents these data and states that they are evidence of saving "significant time in reviewing and approving its NEPA documents since undertaking NEPA assignment." Moreover, the California DOT's reported median time frame of 6 years for EIS reviews only accounts for projects that both started and completed their environmental review after the state assumed NEPA authority. Because only 10 years have passed since California assumed NEPA authority in 2007, every EIS review started and completed since 2007 automatically has a shorter time frame than the 16-year baseline. Thus, it will be 2023 before any EIS review in California could equal the baseline, let alone exceed it, making any EIS review started after assumption of NEPA authority and completed before 2023 appear to demonstrate time savings.

Texas DOT officials stated that they had challenges determining a baseline for environmental assessments because there is no nationally accepted standard definition of when an environmental assessment begins. Moreover, the Texas DOT recently revised its environmental assessment baseline, reducing it from 3 years to 2.5 years and including projects over a 2-year period instead of a longer 3-year period due to uncertainties about the quality of the older data, according to Texas DOT officials. Texas also initially included, then excluded, three outliers from its revised baseline (reviews that took between 6 and 9 years to complete) because officials determined they were not representative of typical environmental assessment reviews. While improving project data to create more accurate baselines is beneficial, doing so also produces different time-savings estimates over time and illustrates the challenges of constructing sound baselines.

As previously discussed, states that are considering or have recently decided to assume NEPA assignment authority have relied, at least in part, on time savings reported by California and Texas. As additional NEPA assignment states begin calculating and reporting time savings as outlined in their MOUs with FHWA, the inherent weakness of a pre- and post-assignment baseline comparison, combined with the challenges of establishing sound baselines, creates the potential for questionable information about the program's effects to be reported and relied upon by other states considering applying for NEPA assignment. Questionable information also negatively affects DOT's and Congress' ability to determine whether NEPA assignment is having its intended effect and resulting in more efficient environmental reviews.

FHWA Has Focused on States' Compliance and Processes but Has Played a Limited Role in Time Savings Measures

FHWA focuses its oversight of NEPA assignment states through audits and monitoring to ensure that states have the processes in place to carry out FHWA's role in the NEPA process and that they comply with the MOU agreed to between FHWA and each of the NEPA assignment states. According to the MOUs, FHWA's annual audits include evaluating the attainment of the performance measures contained in each MOU.
Each of the five MOUs contains four performance measures: (1) documenting compliance with NEPA and other federal laws and regulations, (2) maintaining internal quality control and assurance measures for NEPA decisions, including legal reviews, (3) fostering communication with other agencies and the general public, and (4) documenting efficiency and timeliness in the NEPA process by comparing the completion of NEPA documents and approvals before and after NEPA assignment. According to FHWA officials, the agency interprets evaluating the attainment of the performance measures contained in the MOU as ensuring that the state has a process in place to assess attainment. For the efficiency and timeliness measures, FHWA does not use its audits to measure whether the state is achieving performance goals. FHWA only ensures that the state has a process in place to track the completion of NEPA documents and approvals before and after NEPA assignment, and that the state follows the process, according to FHWA officials. For example, FHWA officials from the California division office stated that they did not assess the validity or accuracy of the baseline methodology. FHWA's Texas division officials added that setting the baseline has not been an FHWA role. FHWA does not assess or collect information on states' calculations of their time savings from NEPA assignment.

FHWA officials stated that their focus on compliance and processes is consistent with the authority they have been granted and that FHWA is not required by statute to measure the environmental review efficiency and timeliness performance of participating states. Moreover, according to these officials, this authority limits their ability to request state information on issues related to, and otherwise assess, states' performance measures, including time savings. Specifically:

• According to an FHWA program document, FHWA is statutorily authorized to require a state to provide any information that FHWA reasonably considers necessary to ensure that the state is adequately carrying out the responsibilities assigned to it. Further, a request for information is reasonable if it pertains to FHWA's review of the state's performance in assuming NEPA assignment responsibilities. However, FHWA officials told us they do not consider an assessment of efficiency and timeliness measures to be necessary to ensure that a state is adequately carrying out its responsibilities.

• Additionally, FHWA considers timeliness performance measures to be a state role. FHWA officials told us that the timeliness performance measures in the NEPA assignment MOUs were added by the states, not FHWA. For instance, California added a timeliness performance measure based on its state legislature's reporting requirements. Each of the subsequent four NEPA assignment states we reviewed also included timeliness performance measures in their respective MOUs.

However, the DOT Office of Inspector General reported in 2017 that while FHWA is not statutorily required to measure performance regarding the environmental review process for NEPA assignment states, the lack of data collection and tracking inhibits FHWA's ability to measure the effectiveness of NEPA assignment in accelerating project delivery. The DOT Office of Inspector General recommended that FHWA develop and implement an oversight mechanism to periodically evaluate the performance of NEPA assignment states; the recommendation has not yet been implemented.
While FHWA does not, according to officials, have the authority to assess states' measurement of timeliness performance, FHWA has a role and the authority to provide guidance or technical assistance to states to help find solutions to particular problems and to ensure that complete and quality information is provided to Congress, state DOTs, and the public to help make informed policy choices. Federal standards for internal control state that agencies should use quality information to determine the extent to which they are achieving their intended program outcomes. Characteristics of quality information include complete, appropriate, and accurate information that helps management make informed decisions and evaluate the entity's performance in achieving strategic outcomes. FHWA's mission to advance the federal-aid highway program is articulated in its national leadership strategic goal, which states that FHWA "leads in developing and advocating solutions to national transportation needs." To carry out its mission, FHWA engages in a range of activities to assist state DOTs in guiding projects through construction to improve the highway system. Specifically, according to agency documents, FHWA provides technical assistance and training to state DOTs and works with states to identify issues and to develop and advocate solutions. Its broad authority to offer guidance and technical assistance can include helping states develop sound program methodologies. Such assistance or guidance could also include sharing best practices and lessons learned on evaluation methodologies, including the creation of baselines, and could potentially result in better quality information to assess the results of NEPA assignment.

Without quality information reported from NEPA assignment states on time savings, questionable information about the program's effects may be relied upon by other states considering applying for NEPA authority and may negatively affect DOT's and Congress' ability to determine whether NEPA assignment is having its intended effect and resulting in more efficient environmental reviews. FHWA officials stated that they advise NEPA assignment states on process improvements and provide technical assistance, but that no state has requested assistance developing evaluation methodologies or baselines. However, offering guidance or technical assistance on evaluation methodologies to measure time savings can help ensure that states are basing decisions to participate on reliable information and that, in turn, NEPA assignment states can provide reliable information to FHWA and Congress to help assess whether NEPA assignment results in more efficient environmental reviews.

Conclusions

A number of factors can affect the time it takes to complete highway and transit projects, including the NEPA review process. Congress has stated that it is in the national interest to expedite the delivery of surface transportation projects by substantially reducing the average length of the environmental review process, and it has taken a number of steps in this direction, including allowing DOT to assign NEPA authority to the states. We found that the time-savings results publicly shared by current NEPA assignment states have spurred interest among other states in seeking NEPA authority. However, states are making program decisions—taking on risk and assuming federal authority—based on questionable information and reports of success.
Given questions about participating states' reported time savings, FHWA can help provide some assurance that the performance measures states develop and use to report results are based on sound methodologies. FHWA has the authority to issue program guidance and to offer and provide technical assistance to help state DOTs find solutions to particular problems, including the development of sound evaluation methodologies. Without such assistance, states may continue to face difficulties establishing sound baselines. Without a sound baseline, the time savings states calculate—which may continue to be publicly reported—may be of questionable accuracy and value, and Congress, in turn, would not have reliable information on whether the assignment of NEPA authority to states is having its intended effect.

Recommendation for Executive Action

The FHWA Administrator should offer and provide guidance or technical assistance to NEPA assignment states on developing evaluation methodologies, including baseline time frames and timeliness measures. (Recommendation 1)

Agency Comments and Our Evaluation

We provided a draft of this report to DOT for review and comment. DOT provided a written response (see app. VI), as well as technical comments, which we incorporated as appropriate. DOT partially concurred with our recommendation. Specifically, DOT stated that it would clarify environmental review start times and communicate this to all FHWA divisions and states. DOT also stated it would provide the NEPA assignment states with any new federal government-wide guidance developed on performance measures for environmental reviews. DOT also stated that it already provides technical assistance to NEPA assignment states in other areas and that FHWA is not required by statute to measure the environmental review efficiency and timeliness of NEPA assignment states. Further, DOT stated that focusing only on timeliness metrics for environmental reviews overlooks other significant benefits of NEPA assignment, such as state control over when and how to conduct environmental reviews, which according to DOT is one of the most significant factors that a state considers in deciding whether to request NEPA assignment authority.

We are encouraged that DOT stated it would clarify environmental review start times. This step can improve the accuracy of environmental assessment review time frames, which is part of developing sound baselines. In addition, while providing general guidance related to performance measures for environmental reviews would be helpful, we continue to believe that FHWA needs to provide further guidance or technical assistance to NEPA assignment states on developing sound evaluation methodologies. We recognize that FHWA has stated that it is not required by statute to measure environmental review efficiency; however, FHWA does have broad authority to offer guidance and technical assistance to help states develop sound program methodologies, including sharing practices and lessons learned on evaluation methodologies. As we reported, Congress indicated its interest in more efficient and timely environmental reviews when it created the NEPA assignment program. FHWA can help provide reasonable assurance that the performance measures states develop and use to report information are based on sound methodologies, which would in turn help provide Congress reliable information on whether the assignment of NEPA authority to states is having its intended effect.
Further, while we acknowledge that other benefits of NEPA assignment may be important to states, all the NEPA assignment states we reviewed consistently identified time savings as a reason for taking on this authority. Offering guidance on evaluation methodologies to measure time savings can help FHWA ensure that additional states interested in NEPA authority for this reason are basing decisions to participate on reliable information.

We are sending copies of this report to interested congressional committees, the Secretary of the Department of Transportation, and other interested parties. In addition, this report will be available at no charge on GAO's website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or flemings@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VII.

Appendix I: Available Information about the Number, Percentage, and Costs of NEPA Reviews for Highway and Transit Projects

Based on 2009 data, we previously reported that 96 percent of environmental reviews for highway projects were completed through categorical exclusions, while smaller shares of projects underwent environmental assessments (3 percent) and EISs (1 percent). We have previously reported that government-wide data on the cost of NEPA reviews are not readily available because agencies do not routinely track the cost of completing NEPA reviews and there is no government-wide mechanism to do so.

To comply with congressional reporting requirements, FHWA maintains the Project and Program Action Information (PAPAI) system, a monitoring database that tracks projects' NEPA review progress at major milestones. FHWA developed PAPAI in 2013 in response to statutory reporting requirements on NEPA time frames. PAPAI tracks EIS and environmental assessment start and end dates, among other information, allowing FHWA to track the processing time for these reviews. FTA does not have a similar monitoring system that tracks NEPA reviews but has developed a new grant management system, the Transit Award Management System (TrAMS), which FTA also uses to track EIS and environmental assessment start and end dates. However, FTA officials told us that TrAMS is still in the early phases of deployment and may contain incomplete information on NEPA time frames for transit projects.

Highway Projects

While some information is available on the number of NEPA reviews (i.e., NEPA review time frames) for highway projects, little to no information is known about the percentage breakdown of the three types of NEPA reviews that have been conducted for these projects and their associated costs.

Number of NEPA Reviews: Some information is available regarding the number of EISs and environmental assessments; however, less is known about the number of categorical exclusions. In an October 2017 report to Congress, FHWA stated that 29 EISs had been initiated since 2012, of which 3 had been completed and 26 remained active. In its October 2013 report to Congress, and consistent with MAP-21 reporting requirements, FHWA reported the number of EISs that state DOTs "initiated" from 2002 through 2012. In this report, FHWA stated that the number of EISs initiated decreased over time. For example, FHWA reported that 38 EISs were initiated in fiscal year 2002, compared to 15 EISs initiated in 2012.
Regarding the number of environmental assessments state DOTs conduct for highway projects, FHWA's October 2017 report to Congress stated that 232 environmental assessments had been initiated since 2012, of which 103 had been completed and 129 remained active. FHWA's October 2013 report to Congress did not report on the number of environmental assessments; FHWA officials told us that prior to fiscal year 2013, FHWA division offices were not required to submit data on environmental assessments. While some information on categorical exclusions exists, the total number of categorical exclusions is unknown. FHWA does not actively track categorical exclusions because state DOTs process most categorical exclusions without involvement from FHWA, as allowed by established programmatic agreements.

Percentage of NEPA Reviews by Type: The percentage breakdown of EISs, environmental assessments, and categorical exclusions conducted by state DOTs for federal-aid highway projects is largely unknown because FHWA has systematically collected numerical data only on EIS reviews and environmental assessments since fiscal year 2013. We previously reported that FHWA estimated that approximately 96 percent of NEPA reviews were categorical exclusions, 3 percent were environmental assessments, and 1 percent were EISs. While the current percentage breakdown of NEPA reviews is not known, FHWA officials told us that categorical exclusions still constitute the vast majority of NEPA reviews for highway projects. Furthermore, highway projects requiring an EIS likely remain the smallest portion of all projects and are likely to be high-profile, complex, and expensive.

Costs of NEPA Reviews: The costs of completing NEPA reviews are unknown, according to officials we interviewed. Officials from FHWA and the National Association of Environmental Professionals believe that data on the cost of processing NEPA reviews do not exist and are not tracked. In our survey of state DOTs, we found that a majority (37 of the 52 state DOTs surveyed) do not collect cost data. For example, officials from the Virginia DOT stated that they do not track NEPA costs and that compiling this information would be difficult and labor-intensive.

Transit Projects

Number and Percentage of NEPA Reviews: FTA has some data on the number of categorical exclusions that transit agencies process but has just begun to collect data on the number of EIS reviews and environmental assessments. In an August 2016 report, FTA reported that 24,426 categorical exclusions were processed for 6,804 projects between February 2013 and September 2015. However, the same report cited a number of limitations and challenges with the underlying data, and as a result, the data may not be accurate. FTA officials told us that its new internal grant management system, TrAMS, also has the capability to track EIS reviews and environmental assessments, but they are in the early stages of collecting this information. Given that data on the number of NEPA reviews are either not available (EISs and environmental assessments) or potentially unreliable (categorical exclusions), data on the percentage of NEPA reviews are also not available. However, FTA officials believe that, as with highway projects, the most common type of NEPA review that transit agencies process is the categorical exclusion.

Costs of NEPA Reviews: FTA and transit agencies do not track the costs of processing NEPA reviews for transit projects.
According to FTA and our previously issued work, separating out the costs of NEPA reviews (versus "planning" costs or "preliminary design" costs) within the project delivery process would be difficult.

Appendix II: Objectives, Scope, and Methodology

Our work focused on federal-aid highway and transit projects and the provisions included in the past three surface transportation reauthorizations that are intended to accelerate the delivery of such projects (i.e., project delivery provisions). In particular, this report: (1) identifies the provisions aimed at accelerating the delivery of highway and transit projects that were included in the last three surface transportation reauthorizations; (2) examines the extent to which the provisions were used by state departments of transportation (state DOT) and transit agencies and the provisions' reported effects, if any, on accelerating the delivery of projects; and (3) evaluates the extent to which DOT has assigned National Environmental Policy Act of 1969 (NEPA) authority to states and the reported effects. In addition, in appendix I, we identify available information on the number and percentage of the different types of NEPA reviews and the costs of conducting NEPA reviews.

To identify all relevant project delivery provisions, we reviewed language in the three most recent surface transportation reauthorizations and included those provisions with the goal of accelerating the delivery of federal-aid highway or transit projects. The three reauthorizations we reviewed are as follows:

• the Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users (SAFETEA-LU)—the seven project delivery provisions we used were derived from provisions we had previously identified from SAFETEA-LU, Title VI, on Transportation Planning and Project Delivery;

• the Moving Ahead for Progress in the 21st Century Act (MAP-21), Division A, Title 1, Subtitle C, entitled Acceleration of Project Delivery (Sections 1301 through 1323); and

• the Fixing America's Surface Transportation Act (FAST Act), Division A, Title 1, Subtitle C, entitled Acceleration of Project Delivery (Sections 1301 through 1318).

One provision (MAP-21 §1318(a)-(c)) included statutory language directing the Department of Transportation (DOT) to develop additional project delivery provisions through rulemaking. Accordingly, we reviewed the DOT regulations promulgated in response to that requirement (23 C.F.R. §§ 771.117(c)(24)-(30), 771.118(c)(14)-(16), 771.118(d)(7)-(8)) and identified 12 additional project delivery provisions. We combined provisions that were modified in later statutory language and did not distinguish between different versions of the provisions, as this precision was not necessary for our purposes. For example, the 150-Day Statute of Limitations provision was created in SAFETEA-LU (section 6002) as a 180-day statute of limitations and amended in MAP-21 (section 1308) to 150 days, which is the version we used. We also grouped the provisions into categories for ease of understanding; determined whether provisions were applicable to highway projects, transit projects, or both; and specified whether provisions were required or optional, based on professional judgment and legal review. We define "required" provisions to mean that federal agencies or state or local transportation agencies that are subject to a provision must adhere to the requirements and obligations in the provision, if all the conditions for its use have been satisfied.
We define "optional" provisions to mean that the relevant entity (a federal agency or a state or local transportation agency) can choose to use the provision if circumstances allow. We met with officials from the Federal Highway Administration (FHWA) and the Federal Transit Administration (FTA) to confirm that we had a complete list of project delivery provisions for highway and transit projects.

To determine states' awareness, use, and perceived effects of the project delivery provisions on highway projects over the previous 5 years, we surveyed state DOTs within all 50 states, the District of Columbia, and Puerto Rico. We directed the survey to officials in state DOTs who oversee environmental compliance for highway projects under NEPA. Because these officials do not have responsibilities with respect to the three Advance Planning category provisions, which allow certain activities to occur prior to the completion of a NEPA review, we excluded those project delivery provisions from the survey. We also excluded two provisions from the survey that are related to DOT assignment of federal NEPA authority, because their use requires a written agreement between FHWA and state DOTs; we addressed those provisions separately through interviews with states that have such written agreements in place. Our survey response rate was 100 percent.

In order to ensure that respondents would interpret our questions as intended, prior to administering the survey we conducted pretests with state DOTs in four states: Georgia, Ohio, Texas, and Washington. In each pretest, we conducted a session with state DOT officials during which the officials completed the survey and then provided feedback on the clarity of the questions. Based on the feedback, we refined some questions and restructured parts of the survey. After the four pretests were completed, we provided a draft copy of the survey to FHWA and the American Association of State Highway and Transportation Officials (AASHTO) for their review and comment. Both provided technical comments that we incorporated, as appropriate. Based on early interviews with highway project stakeholders and our pretests, we determined that the survey should be sent to environmental officials at the state DOTs. Additional information about our survey methodology includes the following:

• To determine whom we should send the pretest and survey to (i.e., the survey respondent), we used a list of environmental officials at the state DOTs compiled by AASHTO. We took steps, such as sending early notification e-mails, to help ensure that the list of respondents we created was accurate.

• We launched our survey on March 7, 2017. We sent e-mail reminders and telephoned survey respondents who had not completed the survey after two weeks, urging them to do so as soon as possible.

• We reviewed survey responses for omissions and analyzed the information provided. The survey and aggregated responses—with the exception of open-ended responses and information that would identify individual state DOTs—are provided in appendix IV.

• For each of the provisions included on the survey, we included references to legal citations in order to minimize confusion among provisions or versions of provisions.

• We provided space in the survey for respondents to provide optional comments for each individual provision and for each category of provisions. We analyzed these comments primarily for additional context and as a source of illustrative examples.
• Because all state DOTs were included in our survey, our analyses are not subject to sampling errors. However, the practical difficulties of conducting any survey may introduce non-sampling errors. For example, differences in how a particular question is interpreted or in the sources of information available to respondents can introduce errors into the survey results. We included steps in both the data collection and data analysis stages, including pretesting, to minimize such non-sampling errors. We also sent a draft of the questionnaire to FHWA and AASHTO for review and comment. We examined the survey results, reviewed survey responses during follow-up interviews with selected states, and performed computer analyses to identify inconsistencies and other indications of error, and we addressed such issues where necessary. A second, independent analyst checked the accuracy of all computer analyses to minimize the likelihood of errors in data processing.

Based on the survey results, we conducted follow-up interviews with officials from 10 state DOTs to discuss their views about the effects the project delivery provisions had on the duration of highway projects in their states in the past 5 years. We did not independently verify state DOT officials' estimates of time savings. We selected state DOTs that reported a range of use and effects of the provisions; we also selected geographically diverse states. The 10 states we selected were Arizona, California, Colorado, Illinois, Maine, Minnesota, Mississippi, Texas, Virginia, and Wyoming. We also asked these state DOTs about their use of and experiences with the three Advance Planning category provisions we excluded from the survey. These interviews are not generalizable to all states but provide additional context for responses.

To determine transit agencies' awareness, use, and views about the effects of the project delivery provisions applicable to transit, we selected a non-generalizable sample of 11 transit agencies, provided a "checklist" of the provisions to the officials regarding their awareness and use of the provisions, and interviewed officials at those agencies who oversee NEPA reviews for transit projects. We selected these agencies based primarily on the number of times they issued a notice of intent to prepare an EIS in the Federal Register from 2005 through 2016, to identify those transit agencies that may have experience preparing EISs or other NEPA reviews and experience using transit project delivery provisions. While notices of intent to prepare an EIS do not always result in a transit agency conducting an actual EIS review, they indicate instances in which a transit agency plans to conduct one. Other factors, such as ridership and geographic location, were also considered in selecting the 11 transit agencies. We identified contacts for the transit agencies by calling the transit agencies' Planning and Environmental Review departments and identifying individuals who had experience with environmental reviews and project delivery provisions.
We interviewed officials at the following transit agencies: Capital Metro (Austin, Texas), Chicago Transit Authority, Houston Metropolitan Transit Authority, Los Angeles County Metropolitan Transportation Authority, Metropolitan Atlanta Rapid Transit Authority, Sacramento Regional Transit District, San Francisco Bay Area Water Emergency Transportation Authority, San Francisco Municipal Transportation Agency, Sound Transit (Seattle, Washington), Southeastern Pennsylvania Transportation Authority, and Tri-County Metropolitan Transportation District of Oregon. Similar to the survey we provided to state DOTs regarding highway projects, we provided the transit agencies with a "checklist" of the provisions on which transit agency officials indicated whether they had heard of and used the provisions.

To understand why the provisions may not be used by selected transit agencies, we also examined the frequency with which transit agencies filed a notice of intent to prepare an EIS in the Federal Register. After discussions with FTA, we used the number of times transit agencies filed a notice of intent to prepare an EIS as a proxy because agencies that have performed multiple EISs, which are typically complex in nature, are more likely to use the provisions and be able to offer insight. Transit agencies may also have experience using the provisions related to categorical exclusions, since transit agencies most commonly process their NEPA reviews as categorical exclusions. However, we did not examine the extent to which categorical exclusions are used by transit agencies as a proxy to identify agencies with experience using the provisions, in part because FTA's current database, TrAMS, does not have comprehensive data on categorical exclusions. We discussed transit agency officials' views about the effects of the provisions during our interviews. These interviews are not generalizable to all transit agencies but provide anecdotal information and context.

To evaluate the extent to which DOT has assigned NEPA authority to states and the effects states have reported from assuming NEPA authority, we identified states that have assumed NEPA authority based on information from FHWA: Alaska, California, Florida, Ohio, Texas, and Utah. We did not include Alaska in our review because that state did not assume NEPA authority until November 2017. For the five states we reviewed, we interviewed state DOT officials and reviewed relevant documentation, including memorandums of understanding and analyses the state DOTs conducted on NEPA assignment authority, such as methodologies for calculating NEPA assignment time savings. We also surveyed the state DOTs that have not yet sought NEPA authority to assess their interest in assuming NEPA authority. In addition, we interviewed FHWA officials about procedures to oversee the performance of NEPA assignment states and interviewed FHWA division officials from those states. We compared FHWA's procedures for overseeing NEPA assignment states against the standards for information and communication contained in Standards for Internal Control in the Federal Government.

To determine available information on the number and percentage of the different NEPA reviews and the costs of conducting NEPA reviews for highway and transit projects, we reviewed relevant publications, obtained documents and analyses from federal agencies, and interviewed federal officials and individuals from professional associations with expertise in conducting NEPA analyses.
We also included a question on the costs of conducting NEPA reviews in the survey we administered to state DOTs. For all objectives, we interviewed agency officials and stakeholders involved in highway and transit projects from FHWA and FTA headquarters and from transportation industry and environmental organizations that are familiar with project delivery and environmental review.

We conducted this performance audit from August 2016 to January 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix III: Project Delivery Provisions Included in the Three Most Recent Federal Transportation Reauthorization Acts That Apply to Highway and Transit Projects

Appendix IV: Highway Questionnaire and Summarized Responses

This appendix provides a copy of the survey sent to state departments of transportation in all 50 states, the District of Columbia, and Puerto Rico concerning their use of the project delivery provisions for highway projects. The appendix also includes the responses received for each of the provisions; it does not include information on non-responses, which resulted either from the survey's skip patterns or from state officials voluntarily declining to respond. GAO also developed names for the provisions in the survey; we subsequently modified the names of several of the provisions for the text of our report to make them more intuitive for readers. The following list matches the provisions that have different names in our report than in the survey.

Report Name
• Categorical Exclusion for Projects within the Existing Operational Right-of-Way
• Procedures for Initiation of Environmental Review

Appendix V: Transit Agency Provisions Checklist and Responses Regarding Awareness and Use

Provision Description
• Authorizes the lead agency of a multimodal project to apply categorical exclusions from the NEPA implementing regulations or procedures of a cooperating DOT operating administration.
• Designates the repair or reconstruction of any road, highway, or bridge that was damaged by an emergency as a categorical exclusion, subject to certain conditions.
• Designates a project within an operational right-of-way as a categorical exclusion, subject to certain conditions.
• Authorizes the designation of a categorical exclusion for projects receiving less than $5 million in federal funds, or less than 15 percent federal funds for a project under $30 million, subject to an annual inflation adjustment.
• For transit projects, designates bridge removal and bridge removal-related activities, such as in-channel work and disposal of materials and debris, as a categorical exclusion.
• For transit projects, designates preventative maintenance, including safety treatments, to culverts and channels within and adjacent to transportation right-of-way as a categorical exclusion.
• For transit projects, designates geotechnical and archeological investigations to provide information for preliminary design, environmental analyses, and permitting purposes as a categorical exclusion.
• For transit projects, designates minor transportation facility realignment for rail safety reasons, such as improving vertical and horizontal alignment of railroad crossings, as a categorical exclusion.
• For transit projects, designates modernization or minor expansions of transit structures and facilities outside existing right-of-way, such as bridges, stations, or rail yards, as a categorical exclusion.
• Authorizes a historic site, park land, or refuge to be used for a transportation program or project if it is determined that "de minimis impact" would result.
• Bars claims seeking judicial review of a permit, license, or approval issued by a federal agency for projects unless they are filed within 150 days after publication of a notice in the Federal Register announcing the final agency action, unless a shorter time is specified in the federal law under which the judicial review is allowed.
• Authorizes the lead agency for a project to use planning products, such as planning decisions, analysis, or studies, in the environmental review process of the project.
• Requires any federal agency responsible for environmental review to give substantial weight to a state or metropolitan programmatic mitigation plan, if one had been developed as part of the transportation planning process, when carrying out responsibilities under NEPA or other environmental law.
• Allows the lead agency of a project, in order to expedite decisions, to use an errata sheet attached to a final EIS, instead of rewriting it, if the comments are minor. Also, to the maximum extent practicable, combines the final EIS and record of decision in certain cases.
• Authorizes the operating administrations of DOT to adopt a draft EIS, EA, or final EIS of another operating administration without recirculating the document for public review if the proposed action is substantially the same as the project considered in the document to be adopted.
• Establishes a 45-day limit after the notice of intent date for a lead agency to identify other agencies to participate in the environmental review process on EIS projects.
• To the maximum extent practicable and consistent with federal law, requires lead agencies to develop a single NEPA document to satisfy the requirements for federal approval or other federal action, including permits.
• Creates several requirements at the start of a project's Section 139 environmental review process, such as (1) establishing a 45-day deadline for DOT to provide a written response to the project sponsor on initiation of the environmental review process; (2) establishing a 45-day deadline for DOT to respond to a request for designation of a lead agency; and (3) requiring the development of a checklist by the lead agency to help identify natural, cultural, and historic resources, to identify agencies, and to improve interagency collaboration.
• Authorizes the lead agency to reduce duplication by eliminating from detailed consideration an alternative proposed in an EIS if the alternative was already proposed in a planning process or a state environmental review process, subject to certain conditions.
• Allows a state to use its federal funds to support a federal or state agency or Indian tribe participating in the environmental review process on activities that directly contribute to expediting and improving project planning and delivery.
• Establishes procedures to resolve issues between project sponsors and relevant resource agencies.
• At the request of a project sponsor or a governor of the state in which the project is located, requires DOT to provide additional technical assistance for a project whose EIS review has taken 2 years, and to establish a schedule for completing the review within 4 years.
• Requires DOT to seek opportunities with states to enter into programmatic agreements to carry out environmental and other project reviews.
• Encourages early cooperation between DOT and other agencies, including states or local planning agencies, in the environmental review process to avoid delay and duplication, and suggests early coordination activities. Early coordination includes establishment of MOUs with states or local planning agencies.
• Limits the comments of participating agencies to subject matter areas within the special expertise or jurisdiction of the agency.
• Requires a coordination plan for public and agency participation in the Section 139 environmental review process within 90 days of a Notice of Intent or the initiation of an Environmental Assessment, including a schedule.
• Provides that issues resolved by the lead agency with concurrence from stakeholders cannot be reconsidered unless significant new information emerges or new circumstances arise.
• Permits states or local transportation agencies to release requests for proposals and award design-build contracts prior to the completion of the NEPA process; however, it precludes a contractor from proceeding with final design or construction before completion of the NEPA process.
• Authorizes states to acquire real property interests for a project before completion of the NEPA process.
• Authorizes the awarding of contracts for the preconstruction services and preliminary design of a project using a competitive selection process before the completion of the NEPA process.

Appendix VI: Comments from the Department of Transportation

Appendix VII: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Steve Cohen (Assistant Director); Brian Chung (Analyst-in-Charge); Rich Johnson; Delwen Jones; Hannah Laufe; Ethan Levy; Ned Malone; Josh Ormond; Tina Paek; Cheryl Peterson; and Joe Thompson made significant contributions to this report.
Why GAO Did This Study

Since 2005, over 30 provisions have been enacted in law to speed up the delivery of highway and transit projects, mainly by streamlining the NEPA review process. NEPA requires federal agencies to evaluate the potential environmental effects of proposed projects on the human environment. These project delivery provisions included new categorical exclusions to streamline the review process and a provision allowing DOT to assign federal NEPA approval authority to states. Congress included provisions in statute for GAO to assess the use of these provisions and whether they have accelerated project delivery. This report examines (1) which project delivery provisions were used by state DOTs and selected transit agencies and the reported effects, and (2) the extent to which DOT has assigned NEPA authority to states and the reported effects, among other objectives. GAO surveyed all state DOTs; interviewed federal and state DOT officials and 11 selected transit agencies that GAO determined were likely to have been affected by the provisions; and analyzed information from NEPA assignment states.

What GAO Found

The Department of Transportation's (DOT) Federal Highway Administration (FHWA) and Federal Transit Administration (FTA) are responsible for National Environmental Policy Act (NEPA) compliance on highway and transit projects. Project sponsors that receive federal funds, typically a state DOT or transit agency, develop the documents necessary for NEPA compliance for FHWA and FTA to evaluate and approve. Project sponsors prepare an environmental impact statement (EIS) when a project will have a significant environmental impact, or an environmental assessment to determine whether a project will have a significant impact. Projects that fit within a category of activities predetermined to have no significant impact (such as repaving a road) can receive a categorical exclusion, and an EIS or environmental assessment is generally not needed. GAO found:

State DOTs and selected transit agencies reported using provisions enacted in law to speed up the delivery of highway and transit projects, and while state DOTs reported that a number of provisions they used sped up delivery of highway projects, the effects on transit projects were less clear. For example, according to GAO's survey responses, 10 of 17 provisions that mainly created new "categorical exclusions" were used by 30 or more state DOTs and generally sped up projects. The provision state DOTs and transit agencies most often reported using was one that authorizes parkland or a historic site to be used for a transportation project if that project has a minimal impact on the environment. A majority of the 11 transit agencies GAO reviewed were not clear whether provisions they used sped up project delivery because these agencies did not track how long it took projects to complete the NEPA process, among other reasons.

DOT assigned NEPA authority to six states: Alaska, California, Florida, Ohio, Texas, and Utah. Under agreements with FHWA, state DOTs calculate time savings by comparing NEPA completion times before (the baseline) and after assuming the authority. Only California and Texas have reported results; California reported that it reduced EIS review time 10 years from a 16-year baseline. However, these reported time savings are questionable because the comparisons do not consider other factors, such as funding, that can affect timelines.
In establishing baselines, both states have also faced challenges, such as deciding how many and which projects to include. California reported to its legislature that its baseline may not be meaningful because of the relatively small sample of five projects, but it nevertheless presents these data on its website as evidence of "significant" time savings. FHWA does not review the states' timeliness measures and time savings estimates, but it has broad authority to offer guidance and technical assistance, which can include helping states develop sound evaluation methodologies and baselines. FHWA officials stated that they provide general technical assistance but that no state has requested help developing evaluation methodologies. Offering and providing such assistance could help ensure that states considering applying for NEPA assignment base their decisions on reliable information, and that FHWA and Congress have reliable information to assess whether NEPA assignment results in more efficient environmental reviews.

What GAO Recommends

FHWA should offer and provide guidance or technical assistance to NEPA assignment states on developing evaluation methodologies, including baseline time frames and timeliness measures. DOT partially concurred with the recommendation, saying it would clarify environmental review start times. GAO continues to believe further evaluation guidance is needed, as discussed in the report.
Background

FEMA's mission is to help people before, during, and after disasters. It provides assistance to those affected by emergencies and disasters by supplying immediate needs (e.g., ice, water, food, and temporary housing) and providing financial assistance grants for damage to personal or public property. FEMA also provides non-disaster assistance grants to improve the nation's preparedness, readiness, and resilience to all hazards. FEMA accomplishes a large part of its mission by awarding grants to state, local, and tribal governments and nongovernmental entities to help communities prevent, prepare for, protect against, mitigate the effects of, respond to, and recover from disasters and terrorist attacks. For fiscal years 2005 through 2014, the agency obligated about $104.5 billion in disaster relief grants. In addition, as of April 2018, the four major disasters in 2017—hurricanes Harvey, Irma, and Maria; and the California wildfires—had resulted in over $22 billion in FEMA grants.

Overview of FEMA's Grants Management Programs and Administration

The current FEMA grants management environment is highly complex, with many stakeholders, IT systems, and users. Specifically, this environment comprises 45 active disaster and non-disaster grant programs, which are grouped into 12 distinct grant categories. For example, one program in the Preparedness: Fire category is the Assistance to Firefighters Grants (AFG) program, which provides grants to fire departments, nonaffiliated emergency medical service organizations, and state fire training academies to support firefighting and emergency response needs. As another example, the Housing Assistance grant program is in the Recovery Assistance for Individuals category and provides financial assistance to individuals and households in geographical areas that have been declared an emergency or major disaster by the President. Table 1 lists FEMA's non-disaster and disaster-based grant categories. According to FEMA, the processes for managing these different types of grants vary because the grant programs were developed independently under at least 18 separate authorizing laws that were enacted over a 62-year period (from 1947 through 2009). The various laws call for different administrative and reporting requirements. For example, the Robert T. Stafford Disaster Relief and Emergency Assistance Act, as amended, established the statutory authority for 11 of the grant programs, such as the administration of Public Assistance and Individual Assistance grant programs after a presidentially declared disaster. The act also requires the FEMA Administrator to submit an annual report to the President and Congress covering FEMA's expenditures, contributions, work, and accomplishments pursuant to the act. As another example, the National Dam Safety Program Act established one of the grant programs aimed at providing financial assistance to improve dam safety. Key stakeholders in modernizing the IT grants management environment include the internal FEMA officials that review, approve, and monitor the grants awarded, such as grant specialists, program analysts, and supervisors. FEMA has estimated that it will need to support about 5,000 simultaneous internal users of its grants management systems. Other users include the grant recipients that apply for, receive, and submit reports on their grant awards; these are considered the external system users.
These grant recipients can include individuals, states, local governments, Indian tribes, institutions of higher education, and nonprofit organizations. FEMA has estimated that there are hundreds of thousands of external users of its grants systems. The administration of the many different grant programs is distributed across four divisions within FEMA's organizational structure. Figure 1 provides an overview of FEMA's organizational structure and the divisions that are responsible for administering grants. Within three of the four divisions—Resilience, United States Fire Administration, and Office of Response and Recovery—16 different grant program offices are collectively responsible for administering the 45 grant programs. The fourth division consists of 10 regional offices that help administer grants within their designated geographical regions. For example, the Office of Response and Recovery division oversees three different offices that administer 13 grant programs that are largely related to providing assistance in response to presidentially declared disasters. Figure 2 shows the number of grant programs administered by each of the four divisions' grant program and regional offices. In addition, appendix II lists the names of the 45 grant programs. FEMA's OCIO is responsible for developing, enhancing, and maintaining the agency's IT systems, and for increasing efficiencies and cooperation across the entire organization. However, we and the DHS Office of Inspector General (OIG) have previously reported that the grant programs and regional offices develop information systems independently of the OCIO, and that this practice has contributed to the agency's disparate IT environment. This disparate environment was due, in part, to FEMA's decentralized IT budget and acquisition practices. For example, from fiscal years 2010 through 2015, the OCIO's budget represented about one-third of the agency's IT budget, with the grant program offices accounting for the remaining two-thirds. In February 2018, the OIG found that FEMA had shown limited progress in improving its IT management and that many of the issues reported in prior audits remained unchanged. As such, the OIG initiated a more comprehensive audit of the agency's IT management that is ongoing.

Overview of FEMA's Legacy Grants Management Systems

FEMA has identified 10 primary legacy IT systems that support its grants management activities. According to the agency, most of these systems were developed to support specific grant programs or grant categories. Table 2 summarizes the 10 primary legacy systems. According to FEMA officials, the 10 primary grant systems are all in operation (several have been for decades) and are not interoperable. As a result, individual grant programs and regional offices have independently developed workarounds intended to address existing capability gaps in the primary systems. FEMA officials stated that while these workarounds have helped the agency partially address those capability gaps, they are often nonstandardized processes and introduce the potential for information security risks and errors. This environment has contributed to labor-intensive manual processes and an increased burden for grant recipients. The disparate systems have also led to poor information sharing and reporting capabilities, as well as difficulty reconciling financial data.
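To make the reconciliation burden concrete, the following minimal sketch stages records from two legacy-style schemas into a common model before any cross-system report can be run. The schemas, field names, and figures are invented for illustration; they are not FEMA's actual data structures.

```python
# Hypothetical sketch: reconciling grant records from two non-interoperable
# legacy schemas into a common staging schema. All fields are invented.

system_a_records = [{"grant_no": "A-2014-001", "obligated_amt": "125000.00", "state": "TX"}]
system_b_records = [{"award_id": "B14001", "amount_cents": 9950000, "recipient_state": "Texas"}]

STATE_ABBREVS = {"Texas": "TX"}  # each system encodes states differently

def normalize_a(rec):
    """Map a system-A record into the common staging schema."""
    return {"award_id": rec["grant_no"],
            "obligated_dollars": float(rec["obligated_amt"]),
            "state": rec["state"]}

def normalize_b(rec):
    """Map a system-B record into the same staging schema."""
    return {"award_id": rec["award_id"],
            "obligated_dollars": rec["amount_cents"] / 100,
            "state": STATE_ABBREVS.get(rec["recipient_state"], rec["recipient_state"])}

# Without a shared staging step, every report spanning systems must
# re-implement conversions like these by hand, which is the kind of
# manual workaround the report describes.
staged = [normalize_a(r) for r in system_a_records] + [normalize_b(r) for r in system_b_records]
total = sum(r["obligated_dollars"] for r in staged)
print(f"{len(staged)} staged awards, ${total:,.2f} obligated")
```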
The DHS OIG and we have previously highlighted challenges with FEMA's past attempts to modernize its grants management systems. For example:
• In December 2006, the DHS OIG reported that EMMIE (Emergency Management Mission Integrated Environment), an effort to modernize FEMA's grants management systems and provide a single grants processing solution, was being developed without a clear understanding and definition of the future solution. The report also identified the need to ensure crosscutting participation from headquarters, regions, and states in developing and maintaining a complete, documented set of FEMA business and system requirements.
• In April 2016, we found weaknesses in FEMA's development of the EMMIE system. For example, we noted that the system was implemented without sufficient documentation of system requirements, an acquisition strategy, an up-to-date cost estimate and schedule, the total amount spent to develop the system, or a systems integration plan. In response to our findings and related recommendations, FEMA took action to address these issues. For example, the agency implemented a requirements management process that, among other things, provided guidance to programs on analyzing requirements to ensure that they are complete and verifiable.
• We reported in November 2017 that EMMIE lacked the ability to collect information on all pre-award activities and that, as a result, agency officials said they and applicants used ad hoc reports and personal tracking documents to manage and monitor the progress of grant applications. FEMA officials added that applicants often struggled to access the system and that the system was not user friendly. Due to EMMIE's shortfalls, the agency had to develop another system in 2017 to supplement EMMIE with additional grant tracking and case management capabilities.

GMM Is to Address FEMA's Shortcomings with Grants Management

FEMA initiated GMM in 2015, in part because of EMMIE's failure to modernize the agency's grants management environment. The program is intended to modernize and streamline that environment. To help streamline the agency's grants management processes, the program established a standard framework intended to represent a common grants management lifecycle. The framework consists of five sequential phases—pre-award, award, post-award, closeout, and post-closeout—along with a sixth phase dedicated to continuous grant program management activities, such as analyzing data and producing reports on grant awards and managing IT systems. FEMA also established 43 distinct business functions associated with these six lifecycle phases. Figure 3 provides the general activities that may occur in each of the grant lifecycle phases, but specific activities would depend on the type of grant being administered (i.e., disaster versus non-disaster). GMM is expected to be implemented within the complex IT environment that currently exists at FEMA. For example, the program is intended to replace the 10 legacy grants management systems, and potentially many additional subsystems, with a single IT system. Each of the 10 legacy systems was developed with its own database(s) and with no standardization of the grants management data; according to FEMA officials, this legacy data has grown significantly over time. Accordingly, FEMA will need to migrate, analyze, and standardize the grants management data before transitioning it to GMM. The agency awarded a contract in June 2016 to support the data migration efforts for GMM.
The agency also implemented a data staging environment in October 2017 to migrate the legacy data and identify opportunities to improve the quality of the data. Further, the GMM system is expected to interface with a total of 38 other systems. These include 19 systems external to DHS (e.g., those provided by commercial entities or other federal government agencies) and 19 systems internal to DHS or FEMA. Some of the internal FEMA systems are undergoing their own modernization efforts that will need to be coordinated with GMM, such as the agency's financial management systems, national flood insurance systems, and enterprise data warehouses. For example, FEMA's Financial Systems Modernization Program was originally expected to deliver a new financial system in time to interface with GMM. However, the financial modernization has been delayed until after GMM is to be fully implemented; thus, GMM will instead need to interface with the legacy financial system. As a result, GMM is in the process of removing one of its key performance parameters in the acquisition program baseline related to financial systems interoperability and timeliness of data exchanged. In May 2017, DHS approved the acquisition program baseline for GMM. The baseline estimated the total lifecycle costs to be about $251 million, initial operational capability to be achieved by September 2019, and full operational capability to be achieved by September 2020.

GMM's Agile Software Development and Acquisition Approach

FEMA intends to develop and deploy its own software applications for GMM using a combination of commercial-off-the-shelf software, open source software, and custom-developed code. The agency plans to rely on an Agile software development approach. According to FEMA planning documentation, the agency plans to fully deliver GMM by September 2020 over eight Agile development increments. Agile development is a type of incremental development, which calls for the rapid delivery of software in small, short increments. Many organizations, especially in the federal government, are accustomed to using a waterfall software development model. This type of model typically consists of long, sequential phases and differs significantly from the Agile development approach. We have previously reported that DHS has sought to establish Agile software development as the preferred method for acquiring and delivering IT capabilities. However, the department has not yet completed critical actions necessary to update its guidance, policies, and practices for Agile programs in areas such as developing lifecycle cost estimates, managing IT requirements, testing and evaluation, oversight at key decision points, and ensuring cybersecurity. (See appendix III for more details on the Agile software development approach.) FEMA's acquisition approach includes using contract support to assist with the development and deployment efforts. The agency selected a public cloud environment to host the computing infrastructure. In addition, from March through July 2017, the agency used a short-term contract aimed at developing prototypes of GMM functionality for grant tracking and monitoring, case management of disaster survivors, grant reporting, and grant closeout. The agency planned to award a second development contract by December 2017 to complete the GMM system (beyond the prototypes) and to begin this work in September 2018.
However, due to delays in awarding the second contract to develop the complete GMM system, in January 2018 the program extended the scope and time frames of the initial short-term prototype contract for an additional year to develop the first increment of the GMM system, referred to as the AFG pilot. On August 31, 2018, FEMA awarded the second development contract, which is intended to deliver the remaining functionality beyond the AFG pilot (i.e., increments 2 through 8). FEMA officials subsequently issued a 90-day planning task order for the Agile development contractor to define the work that needs to be done to deliver GMM and the level of effort needed to accomplish that work. However, the planning task order was paused after a bid protest was filed with GAO in September 2018. According to FEMA officials, they resumed work on the planning task order after the bid protest was withdrawn by the protester on November 20, 2018, and then the work was paused again during the partial government shutdown from December 22, 2018, through January 25, 2019.

Assistance to Firefighters Grants Pilot

FEMA began working on the AFG pilot—GMM's first increment—in January 2018. This increment was intended to pilot GMM's use of Agile development methods to replace core functionality of the AFG system (i.e., one of the 10 legacy systems). This system supports three preparedness/fire-related grant programs—the Assistance to Firefighters Grants Program, the Fire Prevention and Safety Grant Program, and the Staffing for Adequate Fire and Emergency Response Grant Program. According to FEMA officials, the AFG system was selected as the first system to be replaced because it is costly to maintain and the DHS OIG had identified cybersecurity concerns with the system. Among the 43 GMM business functions discussed earlier in this report, FEMA officials specified 19 functions to be delivered in the AFG pilot. Figure 4 shows the planned time frames for delivering the AFG pilot in increment 1 (which consisted of four 3-month Agile development sub-increments), as of August 2018. As of August 2018, the program was working on sub-increment 1C of the pilot. In September 2018, GMM deployed its first set of functionality to a total of 19 AFG users, which included seven of 169 total internal AFG users and 12 of more than 153,000 external AFG users. The functionality supported four of the 19 business functions, all related to the closeout of grants (i.e., the process by which all applicable administrative actions and all required work to award a grant have been completed). This functionality included tasks such as evaluation of final financial reports submitted by grant recipients and final reconciliation of finances (e.g., final disbursement to recipients and return of unobligated federal funds). According to FEMA officials, closeout functionality was selected first for deployment because it was the most costly component of the legacy AFG system to maintain, as it is an entirely manual and labor-intensive process. The remaining AFG functionality and remaining AFG users are to be deployed by the end of the AFG pilot.

GMM Oversight Structure

The GMM program is executed by a program management office, which is overseen by a program manager and program executive. This office is responsible for directing the day-to-day operations and ensuring completion of GMM program goals and objectives.
The program office resides within the Office of Response and Recovery, which is headed by an Associate Administrator who reports to the FEMA Administrator. In addition, the GMM program executive (who is also the Regional Administrator for FEMA Region IX) reports directly to the FEMA Administrator. GMM is designated as a level 2 major acquisition, which means that it is subject to oversight by the DHS acquisition review board. The board is chaired by the DHS Undersecretary for Management and is made up of executive-level members, such as the DHS Chief Information Officer. The acquisition review board serves as the departmental executive board that decides whether to approve GMM through key acquisition milestones and reviews the program's progress and its compliance with approved documentation every 6 months. The board approved the acquisition program baseline for GMM in May 2017 (i.e., estimated costs of about $251 million and full operational capability to be achieved by September 2020). In addition, the program is reviewed on a monthly basis by FEMA's Grants Management Executive Steering Group, which is chaired by the Deputy Administrator of FEMA. Further, DHS's Financial Systems Modernization Executive Steering Committee, chaired by the DHS Chief Financial Officer, meets monthly and is to provide guidance, oversight, and support to GMM.

Cybersecurity Risk Management Framework

For government organizations, including FEMA, cybersecurity is a key element in maintaining the public trust. Inadequately protected systems may be vulnerable to insider threats. Such systems are also vulnerable to the risk of intrusion by individuals or groups with malicious intent who could unlawfully access the systems to obtain sensitive information, disrupt operations, or launch attacks against other computer systems and networks. Moreover, cyber-based threats to federal information systems are evolving and growing. Accordingly, we designated cybersecurity as a government-wide high-risk area 22 years ago, in 1997, and it has since remained on our high-risk list. Federal law and guidance specify requirements for protecting federal information and information systems. The Federal Information Security Modernization Act (FISMA) of 2014 requires executive branch agencies to develop, document, and implement an agency-wide cybersecurity program to provide security for the information and information systems that support the operations and assets of the agency. The act also tasks NIST with developing, for systems other than those for national security, standards and guidelines to be used by all agencies to establish minimum cybersecurity requirements for information and information systems based on their level of cybersecurity risk. Accordingly, NIST developed a risk management framework of standards and guidelines for agencies to follow in developing cybersecurity programs. The framework addresses broad cybersecurity and risk management activities, including categorizing the system's impact level; selecting, implementing, and assessing security controls; authorizing the system to operate (based on progress in remediating control weaknesses and an assessment of residual risk); and monitoring the efficacy of controls on an ongoing basis. Figure 5 provides an overview of this framework.
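The authorize and monitor steps of the framework lend themselves to a simple illustration. The following minimal sketch flags systems operating without a current authorization to operate (ATO); the step names follow the framework described above, while the system inventory and dates are hypothetical, not FEMA's actual records.

```python
# Hypothetical sketch of the authorize/monitor steps of the NIST risk
# management framework: flag systems lacking a current ATO.
from datetime import date

RMF_STEPS = ["Categorize", "Select", "Implement", "Assess", "Authorize", "Monitor"]

inventory = [
    {"system": "Legacy grants system A", "ato_expires": date(2019, 3, 1)},
    {"system": "Legacy grants system B", "ato_expires": None},  # never authorized
]

def needs_authorization(entry, as_of):
    """A system needs (re)authorization if it has no ATO or its ATO has lapsed."""
    return entry["ato_expires"] is None or entry["ato_expires"] < as_of

today = date(2019, 1, 15)
for entry in inventory:
    if needs_authorization(entry, today):
        print(f"{entry['system']}: operating without a current authorization to operate")
```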
Prior DHS OIG assessments, such as the annual evaluation of DHS's cybersecurity program, have identified issues with FEMA's cybersecurity practices. For example, in 2016, the OIG reported that FEMA was operating 111 systems without an authorization to operate. In addition, the agency had not created any corrective action plans for 11 of the systems that were classified as "Secret" or "Top Secret," thus limiting its ability to ensure that all identified cybersecurity weaknesses were mitigated in a timely manner. The OIG further reported that, for several years, FEMA had consistently fallen below DHS's 90 percent target for remediating corrective action plans, with scores ranging from 73 to 84 percent. Further, the OIG reported that FEMA had a significant number of open corrective action plans (18,654) and that most of these plans did not contain sufficient information to address identified weaknesses. In 2017, the OIG reported that FEMA had made progress in addressing security weaknesses. For example, it reported that the agency had reduced the number of systems operating without an authorization to operate from 111 to 15.

FEMA Has Implemented Most Leading Practices for Reengineering Grants Management Business Processes and Managing IT Requirements

According to GAO's Business Process Reengineering Assessment Guide and the Software Engineering Institute's Capability Maturity Model Integration® for Development, successful business process reengineering can enable agencies to replace their inefficient and outmoded processes with streamlined processes that can more effectively serve the needs of the public, significantly reduce costs, and improve performance. Many times, new IT systems are implemented to support these improved business processes. Thus, effective management of IT requirements is critical for ensuring the successful design, development, and delivery of such new systems. These leading practices state that effective business process reengineering and IT requirements management involve, among other things, (1) ensuring strong executive leadership support for process reengineering; (2) assessing the current and target business environment and business performance goals; (3) establishing plans for implementing new business processes; (4) establishing clear, prioritized, and traceable IT requirements; (5) tracking progress in delivering IT requirements; and (6) incorporating input from end user stakeholders. Among these six selected leading practices, FEMA fully implemented four and partially implemented two for its GMM program. For example, the agency ensured strong senior leadership commitment to changing the way it manages its grants, took steps to assess and document its business environment and performance goals, defined initial IT requirements for GMM, took recent actions to better track progress in delivering planned IT requirements, and incorporated input from end user stakeholders. In addition, FEMA had begun planning for business process reengineering; however, it had not finalized plans for transitioning users to the new business processes. Further, while GMM took steps to establish clearly defined and prioritized IT requirements, key requirements were not always traceable. Table 3 summarizes the extent to which FEMA implemented the selected leading practices.
FEMA Executive Leadership Demonstrated Strong Commitment to Reengineering Grants Management Processes

According to GAO's Business Process Reengineering Assessment Guide, the most critical factor in a reengineering effort is strong executive leadership support, which establishes credibility regarding the seriousness of the effort and maintains momentum as the agency faces potentially extensive changes to its organizational structure and values. Without such leadership, even the best process design may fail to be accepted and implemented. Agencies should also ensure that there is ongoing executive support (e.g., executive steering committee meetings headed by the agency leader) to oversee the reengineering effort from start to finish. FEMA senior leadership consistently demonstrated its commitment and support for streamlining the agency's grants management business processes and provided ongoing executive support. For example, one of the Administrator's top priorities highlighted in FEMA's 2014 through 2022 strategic plans was to strengthen grants management through innovative systems and business processes to rapidly and effectively deliver the agency's mission. In accordance with this strategic priority, FEMA initiated GMM with the intent to streamline and modernize grants management across the agency. In addition, FEMA established the Grants Management Executive Steering Group in September 2015. This group is responsible for transforming the agency's grants management capabilities through its evaluation, prioritization, and oversight of grants management modernization programs, such as GMM. The group's membership consists of FEMA senior leaders from across the agency's program and business support areas, such as the FEMA regions, Individual Assistance, Public Assistance, Preparedness, the Office of the Chief Financial Officer, the Office of Chief Counsel, the OCIO, and the Office of Policy and Program Analysis. Reflecting its ongoing commitment to reengineering grants management processes, the group meets monthly to review GMM's updates, risks, and action items, as well as the program's budget, schedule, and acquisition activities. For example, the group reviewed the status of key acquisition activities and program milestones, such as the follow-on award for the pilot contractor and the program's initial operational capability date. The group also reviewed GMM's program risks, such as data migration challenges (discussed later in this report) and delays in the Agile development contract award. With this continuous executive involvement, FEMA is better positioned to maintain momentum for reengineering the new grants management business processes that the GMM system is intended to support.

FEMA Documented Its Current and Target Grants Management Business Processes and Performance Improvement Goals

GAO's Business Process Reengineering Assessment Guide states that agencies undergoing business process reengineering should develop a common understanding of the current environment by documenting existing core business processes to show how the processes work and how they are interconnected. The agencies should then develop a deeper understanding of the target environment by modeling the workflow of each target business process in enough detail to provide a common understanding of exactly what will be changed and who will be affected by a future solution.
Agencies should also assess the performance of their current major business processes to identify problem areas that need to be changed or eliminated, and to set realistically achievable, customer-oriented, and measurable business performance improvement goals. FEMA has taken steps to document the current and target grants management business processes. Specifically:
• The agency took steps to develop a common understanding of its grants management processes by documenting each of the 12 grant categories. For example, in 2016 and 2017, the agency conducted several nationwide user outreach sessions with representatives from FEMA headquarters, the 10 regional offices, and state and local grant recipients to discuss the grant categories and the current grants management business environment.
• FEMA's Office of Chief Counsel developed a Grants Management Manual in January 2018 that outlined the authorizing laws, regulations, and agency policies for all of its grant programs. According to the Grants Management Executive Steering Group, the manual is intended to promote standardized grants management procedures across the agency. Additionally, the group expects grant program and regional offices to assess the manual against their own practices, make updates as needed, and ensure that their staff are properly informed and trained.
• FEMA also documented target grants management business process workflows for 18 of the 19 business functions that were notionally planned to be developed and deployed in the AFG pilot by December 2018. However, the program experienced delays in developing the AFG pilot (discussed later in this report) and thus deferred defining the remaining business function until closer to its development, which is now planned for August 2019.
In addition, FEMA established measurable business performance goals for GMM that are aimed at addressing problem areas and improving grants management processes. Specifically, the agency established 14 business performance goals and associated thresholds in an October 2017 acquisition program baseline addendum, as well as 126 performance metrics for all 43 of the target grants management business functions in its March 2017 test and evaluation master plan. According to FEMA, the 14 business performance goals are intended to represent essential outcomes that will indicate whether GMM has successfully met critical, business-focused mission needs. GMM performance goals include areas such as improvements in user satisfaction with GMM compared to the legacy systems and improvements in the timeliness of grant award processing. For example, one of GMM's goals is for at least 40 percent of users surveyed to agree or strongly agree that their grants management business processes are easier to accomplish with GMM than with the legacy systems. Program officials stated that they plan to work with the Agile development contractor to refine their performance goals and target thresholds, develop a plan for collecting the data and calculating the metrics, and establish a performance baseline with the legacy systems. Program officials also stated that they plan to complete these steps by September 2019—GMM's initial operational capability date—which is when they are required to begin reporting these metrics to the DHS acquisition review board.
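Once survey data are collected, checking a goal of this kind is a straightforward calculation. The following minimal sketch evaluates the 40 percent user-satisfaction goal described above against a handful of hypothetical responses; the responses and the calculation details are illustrative, not FEMA's planned methodology.

```python
# Hypothetical sketch: evaluating one GMM business performance goal, that
# at least 40 percent of surveyed users agree or strongly agree that grants
# processes are easier with GMM than with the legacy systems.

responses = ["strongly agree", "agree", "neutral", "disagree", "agree"]

favorable = sum(1 for r in responses if r in ("agree", "strongly agree"))
share = favorable / len(responses)

THRESHOLD = 0.40  # goal stated in the acquisition program baseline addendum
print(f"favorable share: {share:.0%} -> goal {'met' if share >= THRESHOLD else 'not met'}")
```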
FEMA Has Begun Planning Its Grants Management Business Process Reengineering, but Has Not Finalized Plans for Transition Activities

According to GAO's Business Process Reengineering Assessment Guide, agencies undergoing business process reengineering should (1) establish an overall plan to guide the effort (commonly referred to as an organizational change management plan) and (2) provide a common understanding for stakeholders of what to expect and how to plan for process changes. Agencies should develop the plan at the beginning of the reengineering effort and provide specific details on upcoming process changes, such as critical milestones and deliverables for an orderly transition, roles and responsibilities for change management activities, reengineering goals, skills and resource needs, key barriers to change, communication expectations, training, and any staff redeployments or reductions-in-force. The agency should develop and begin implementing its change management plan ahead of introducing new processes to ensure sufficient support among stakeholders for the reengineered processes. While FEMA has begun planning its business process reengineering activities, it has not finalized its plans or established time frames for their completion. Specifically, as of September 2018, program officials were in the process of drafting an organizational change management plan that is intended to establish an approach for preparing grants management stakeholders for upcoming changes. According to FEMA, this document is intended to help avoid uncertainty and confusion among stakeholders as changes are made to the agency's grant programs, and to ensure successful adoption of new business processes, strategies, and technologies. As discussed previously in this report, the transition to GMM will involve changes to FEMA's disparate grants management processes, which are managed by many different stakeholders across the agency. Program officials acknowledged that change management is the biggest challenge they face in implementing GMM and said they had begun taking several actions intended to support the agency's change management activities. For example, program officials reported in October 2018 that they had recently created an executive-level working group intended to address FEMA's policy challenges related to the standardization of grants management processes. Additionally, program officials reported that they planned to (1) hire additional support staff focused on coordinating grants change management activities and (2) pursue regional office outreach to encourage broad support among GMM's decentralized stakeholders, such as state, local, tribal, and territorial partners. However, despite these actions, the officials were unable to provide time frames for completing the organizational change management plan or the additional actions. Until the plan and actions are complete, the program lacks assurance that it will have sufficient support among stakeholders for the reengineered processes. In addition, GMM did not establish plans and time frames for the activities that needed to take place prior to, during, and after the transition from the legacy AFG system to GMM. Instead, program officials stated that they had worked collaboratively with the legacy AFG program and planned these details informally by discussing them in various communications, such as emails and meetings.
However, this informal planning approach is not repeatable, and repeatability is essential for this program because FEMA plans to transition many sets of functionality to many different users over the program's lifecycle. Program officials acknowledged that future transitions will require more repeatable transition planning and stated that they intend to establish such plans, but they did not provide a time frame for when these changes would be made. Until FEMA develops a repeatable process, with established time frames for communicating transition details to its customers prior to each transition, the agency risks that the transition from the legacy systems to GMM will not occur as intended. It also increases the risk that stakeholders will not support the implementation of reengineered grants management processes.

GMM Took Steps to Establish Clearly Defined and Prioritized IT Requirements, but Key Requirements Were Not Always Traceable

Leading practices for software development efforts state that IT requirements are to be clearly defined and prioritized. This includes, among other things, maintaining bidirectional traceability as the requirements evolve, to ensure there are no inconsistencies among program plans and requirements. In addition, programs using Agile software development are to maintain a product vision, or roadmap, to guide the planning of major program milestones and provide a high-level view of planned requirements. Programs should also maintain a prioritized list (referred to as a backlog) of narrowly defined requirements (referred to as lower-level requirements) that are to be delivered. Programs should maintain this backlog with the product owner to ensure the program is always working on the highest-priority requirements that will deliver the most value to the users. The GMM program established clearly defined and prioritized requirements and maintained bidirectional traceability among the various levels of requirements:
• Grant lifecycle phases: In its Concept of Operations document, the program established six grants management lifecycle phases that represent the highest level of GMM's requirements, from which it derives lower-level requirements.
• Business functions: The Concept of Operations document also identifies the next level of GMM requirements—the 43 business functions that describe how FEMA officials, grant recipients, and other stakeholders are to manage grants. According to program officials, the 43 business functions are to be refined, prioritized, and delivered to GMM customers iteratively. Further, for the AFG pilot, the GMM program office prioritized 19 business functions with the product owner and planned the development of these functions in a roadmap.
• Epics: GMM's business functions are decomposed into epics, which represent smaller portions of functionality that can be developed over multiple increments. According to program officials, GMM intends to develop, refine, and prioritize the epics iteratively. As of August 2018, the program had developed 67 epics in the program backlog. An example of one of the epics for the AFG pilot is to prepare and submit grant closeout materials.
• User stories: The epics are decomposed into user stories, which convey the customers' requirements at the smallest and most discrete unit of work that must be done within a single sprint to create working software. GMM develops, refines, and prioritizes the user stories iteratively. As of August 2018, the program had developed 1,118 user stories in the backlog. An example of a user story is "As an external user, I can log in with a username and password."
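This four-level decomposition is straightforward to represent in software. The following minimal sketch models the phase-to-function-to-epic-to-story hierarchy described above; the sample entries echo the report's examples, but the data model and priorities are hypothetical, not GMM's actual backlog.

```python
# Hypothetical sketch of a four-level requirements backlog:
# lifecycle phase -> business function -> epic -> user story.
from dataclasses import dataclass, field

@dataclass
class UserStory:
    text: str        # smallest discrete unit of work, completable in one sprint
    priority: int    # backlog rank, set with the product owner
    done: bool = False

@dataclass
class Epic:
    name: str        # portion of a business function; may span increments
    stories: list = field(default_factory=list)

backlog = {
    # keyed by (lifecycle phase, business function)
    ("Closeout", "Grant closeout"): [
        Epic("Prepare and submit grant closeout materials", [
            UserStory("As an external user, I can log in with a username and password.", 1),
            UserStory("As a grant recipient, I can upload my final financial report.", 2),
        ]),
    ],
}

# With a structure like this, tracing a story up to its phase and function
# is a mechanical walk through the hierarchy.
for (phase, function), epics in backlog.items():
    for epic in epics:
        for story in sorted(epic.stories, key=lambda s: s.priority):
            print(f"{phase} > {function} > {epic.name} > {story.text}")
```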
Figure 6 provides an example of how GMM's different levels of requirements are decomposed. Nevertheless, while we found requirements to be traceable at the sprint level (i.e., epics and user stories), traceability of requirements at the increment level (i.e., business functions) was inconsistent among different requirements planning documents. Specifically, the capabilities and constraints document showed that five business functions were planned to be developed within sub-increment 1A, whereas the other key planning document—the roadmap for the AFG pilot—showed one of those five functions as planned for sub-increment 1B. In addition, the capabilities and constraints document showed that nine business functions were planned to be developed within sub-increment 1B, but the roadmap showed one of those nine functions as planned for sub-increment 1C. Program officials stated that they decided to defer these functions to later sub-increments due to unexpected technical difficulties encountered when developing functionality and to reprioritization of functions with the product owners. While the officials updated the roadmap to reflect the deferred functionality, they did not update the capabilities and constraints document to maintain traceability between these two important requirements planning documents. Program officials stated that they learned during the AFG pilot that the use of a capabilities and constraints document for increment-level scope planning was not ideal and that they intended to change how they document planned requirements for future increments. However, program officials did not provide a time frame for when this change would be made. Until the program makes this change and then ensures that it maintains traceability of increment-level requirements between requirements planning documents, it will continue to risk confusion among stakeholders about what is to be delivered. In addition, until recently, GMM's planning documents were missing up-to-date information on when most of the legacy systems would be transitioned to GMM. Specifically, while the program's planning documents (including the GMM roadmap) provided key milestones for the entire lifecycle of the program and high-level capabilities to be delivered in the AFG pilot, these documents lacked up-to-date time frames for transitioning the nine remaining legacy systems. For example, in May 2017, GMM drafted notional time frames for transitioning the legacy systems, including plans for AFG to be the seventh system replaced by GMM. However, in December 2017, the program decided to reprioritize the legacy systems so that AFG would be replaced first—yet this major change was not reflected in the program's roadmap. Moreover, while AFG program officials were informed of the decision to transition the AFG program first, officials from other grant programs told us in June 2018 that they had not been informed of when their systems were to be replaced. As a result, these programs were uncertain about when they should start planning for their respective transitions. In August 2018, GMM program officials acknowledged that they were delayed in deciding the sequencing order for the legacy system transitions.
Program officials stated that the delay was due to their need to factor the Agile development contractor's perspective into these decisions; yet, at that time, the contract award had been delayed by approximately 8 months. Subsequently, in October 2018, program officials identified tentative time frames for transitioning the remaining legacy systems. Program officials stated that they determined these tentative time frames based on key factors, such as mission need, cost, security vulnerabilities, and technical obsolescence, and that they had shared the new time frames with grant program officials. The officials also stated that, once the Agile contractor begins contract performance, they expect to be able to validate the contractor's capacity and finalize the time frames by obtaining approval from the Grants Management Executive Steering Group. By taking steps to update and communicate these important time frames, FEMA should be better positioned to ensure that each of the grant programs is prepared for transitioning to GMM.

GMM Recently Began Tracking Progress in Delivering Planned IT Requirements

According to leading practices, Agile programs should track their progress in delivering planned IT requirements within a sprint (i.e., a short iteration that produces working software). Given that sprints are very short cycles of development (e.g., 2 weeks), the efficiency of completing planned work within a sprint relies on a disciplined approach that includes using a fixed pace, referred to as the sprint cadence, that provides a consistent and predictable development routine. A disciplined approach also includes identifying by the start of a sprint which user stories will be developed, developing those stories to completion (e.g., fully tested and demonstrated to, and accepted by, the product owner), and tracking completion progress of those stories. Progress should be communicated to relevant stakeholders and used by the development teams to better understand their capacity to develop stories, continuously improve their processes, and forecast how long it will take to deliver all remaining capabilities. The GMM program did not effectively track progress in delivering IT requirements during the first nine sprints, which occurred from January to June 2018. These gaps in tracking the progress of requirements contributed, in part, to the program's delays in delivering the 19 AFG business functions, which were originally planned for December 2018 and are now deferred to August 2019. However, beginning in July 2018, in response to our ongoing review, the program took steps to improve in these areas. Specifically:
• GMM did not communicate the status of its Agile development progress to program stakeholders, such as the grant programs, the regional offices, and the development teams, during most of the first nine sprints. Program officials acknowledged that they should use metrics to track development progress and, in July 2018, they began reporting metrics to program stakeholders. For example, they began collecting and providing data on the number of stories planned and delivered, the estimated capacity of development teams, and the number of days spent working on the sprint, as part of the program's weekly status reports to program stakeholders, such as product owners.
• Rather than using a fixed, predictable sprint cadence, GMM allowed a variable development cadence, meaning that sprint durations varied from 1 to 4 weeks throughout the first nine sprints. Program officials noted that they had experimented with a variable cadence to allow more time to complete complex technical work. Program officials stated that they realized that varying the sprints was not effective and, in July 2018 for sprint 10, they reverted to a fixed, 2-week cadence.
• GMM added a significant amount of scope during its first nine sprints, after the development work had already begun. For example, the program committed to 28 user stories at the beginning of sprint 8, and then nearly doubled the work by adding 25 additional stories in the middle of the sprint. Program officials cited multiple reasons for adding more stories, including that an insufficient number of stories had been defined in the backlog when the sprint began, the realization that planned stories were too large and needed to be decomposed into smaller stories, and the realization that other work would be needed in addition to what was originally planned. Program officials recognized that, by the start of a sprint, the requirements should be sufficiently defined such that they are ready for development without requiring major changes during the sprint. The program made recent improvements in sprints 11 and 12, which had only five stories added after the start of a sprint.
By taking these steps to establish consistency among sprints, the program has better positioned itself to more effectively monitor and manage the remaining IT development work. In addition, this improvement in consistency should help the program avoid future deferments of functionality.
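Metrics like those GMM began reporting feed directly into simple progress forecasts. The following minimal sketch computes stories delivered versus planned and an average velocity, then projects the number of sprints remaining; all sprint figures and the backlog size are hypothetical, not GMM's actual data.

```python
# Hypothetical sketch of sprint tracking: stories planned vs. delivered,
# average velocity, and a naive forecast of sprints remaining.

sprints = [
    {"sprint": 10, "planned": 30, "delivered": 24},
    {"sprint": 11, "planned": 28, "delivered": 26},
    {"sprint": 12, "planned": 27, "delivered": 27},
]

velocity = sum(s["delivered"] for s in sprints) / len(sprints)  # avg stories per sprint

remaining_stories = 400  # stories left in the backlog (hypothetical)
sprints_needed = -(-remaining_stories // int(velocity))  # ceiling division

for s in sprints:
    print(f"sprint {s['sprint']}: {s['delivered']}/{s['planned']} stories completed")
# With a fixed 2-week cadence, sprints_needed translates directly into weeks.
print(f"average velocity {velocity:.1f}; ~{sprints_needed} sprints to clear the backlog")
```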
GMM Is Involving Stakeholders and Incorporating Input

Leading practices state that programs should regularly collaborate with, and collect input from, relevant stakeholders; monitor the status of stakeholder involvement; incorporate stakeholder input; and measure how well stakeholders' needs are being met. For Agile programs, it is especially important to track user satisfaction to determine how well the program has met stakeholders' needs. Consistent stakeholder participation helps ensure that the program meets its stakeholders' needs. FEMA implemented its responsibilities in this area through several means, such as stakeholder outreach activities; development of a strategic communications plan; and continuous monitoring, solicitation, and recording of stakeholder involvement and feedback. For example, the agency conducted nationwide outreach sessions from January 2016 through August 2017 and began conducting additional outreach sessions in April 2018. These outreach sessions involved hundreds of representatives from FEMA headquarters, the 10 regional offices, and state and local grant recipients to collect information on the current grants management environment and opportunities for streamlining grants management processes. FEMA also held oversight and stakeholder outreach activities and actively solicited and recorded feedback from its stakeholders on a regular basis. For example, GMM regularly verified with users that the new functionality met their IT requirements, as part of the Agile development cycle. Additionally, we observed several GMM biweekly requirements validation sessions where the program's stakeholders were involved and provided feedback as part of the requirements development and refinement process. In addition, FEMA identified GMM stakeholders and tracked its engagement with these stakeholders using a stakeholder register.
The agency also defined processes for how the GMM program is to collaborate with its stakeholders in a stakeholder communication plan and an Agile development team agreement. Also, while several officials from the selected grant program and regional offices that we interviewed indicated that the program could improve in communicating its plans for GMM and incorporating stakeholder input, most of the representatives from these offices stated that GMM is doing well at interacting with its stakeholders. Finally, in October 2018, program officials reported that they had recently begun measuring user satisfaction by conducting surveys and interviews with users who have utilized the new functionality within GMM. The program's outreach activities, collection of stakeholder input, and measurement of user satisfaction demonstrate that the program is taking the appropriate steps to incorporate stakeholder input.

FEMA Lacks a Current Cost Estimate and Reliable Schedule for GMM

GMM's Initial Cost Estimate Was Reliable, but Is Now Outdated

Reliable cost estimates are critical for successfully delivering IT programs. Such estimates provide the basis for informed decision making, realistic budget formulation, meaningful progress measurement, and accountability for results. GAO's Cost Estimating and Assessment Guide defines leading practices related to the following four characteristics of a high-quality, reliable estimate:
• Comprehensive. The estimate accounts for all possible costs associated with a program, is structured in sufficient detail to ensure that costs are neither omitted nor double counted, and documents all cost-influencing assumptions.
• Well-documented. Supporting documentation explains the process, sources, and methods used to create the estimate; contains the underlying data used to develop the estimate; and is adequately reviewed and approved by management.
• Accurate. The estimate is not overly conservative or optimistic, is based on an assessment of the costs most likely to be incurred, and is regularly updated so that it always reflects the program's current status.
• Credible. The estimate discusses any limitations of the analysis resulting from uncertainty or sensitivity surrounding data or assumptions; its results are cross-checked; and an independent cost estimate is conducted by a group outside the acquiring organization to determine whether other estimating methods produce similar results.
In May 2017, DHS approved GMM's lifecycle cost estimate of about $251 million for fiscal years 2015 through 2030. We found this initial estimate to be reliable because it fully or substantially addressed all the characteristics associated with a reliable cost estimate. For example, the estimate comprehensively included government and contractor costs, all elements of the program's work breakdown structure, and all phases of the system lifecycle, and it was aligned with the program's technical documentation at the time the estimate was developed. GMM also fully documented the key assumptions, data sources, estimating methodology, and calculations for the estimate. Further, the program conducted a risk assessment and sensitivity analysis, and DHS conducted an independent assessment to validate the accuracy and credibility of the cost estimate. However, key assumptions that FEMA made about the program changed soon after DHS approved the cost estimate in May 2017. Thus, the initial cost estimate no longer reflects the current approach for the program.
For example, key assumptions about the program that changed include: Change in the technical approach: The initial cost estimate assumed that GMM would implement a software-as-a-service model, meaning that FEMA would rely on a service provider to deliver software applications and the underlying infrastructure to run them. However, in December 2017, the program instead decided to implement an infrastructure-as-a-service model, meaning that FEMA would develop and deploy its own software application and rely on a service provider to deliver and manage the computing infrastructure (e.g., servers, software, storage, and network equipment). According to program officials, this decision was made after learning from the Agile prototypes that the infrastructure-as-a-service model would allow GMM to develop the system in a more flexible environment. Increase in the number of system development personnel: A key factor with Agile development is the number of development teams (each consisting of experts in software development, testing, and cybersecurity) that are operating concurrently and producing separate portions of software functionality. Program officials initially assumed that they would need three to four concurrent Agile development teams, but subsequently realized that they would instead need to expend more resources to achieve GMM’s original completion date. Specifically, program officials now expect they will need to at least double, and potentially triple, the number of concurrent development teams to meet GMM’s original target dates. Significant delays and complexities with data migration: In 2016 and 2017, GMM experienced various technical challenges in its effort to transfer legacy system data to a data staging platform. This transfer was needed to standardize the data before eventually migrating it to GMM. These challenges resulted in significant delays and cost increases. Program officials reported that, by February 2018—at least 9 months later than planned—all legacy data had been transferred to a data staging platform so that FEMA officials could begin analyzing and standardizing the data prior to migrating it into GMM. FEMA officials reported that they anticipated that the cost estimate would increase, and that the increase would be high enough to breach the $251 million threshold set in GMM’s May 2017 acquisition program baseline. Thus, consistent with DHS’s acquisition guidance, the program informed the DHS acquisition review board of this anticipated breach. The board declared that the program was in a cost breach status, as of September 12, 2018. As of October 2018, program officials stated that they were in the process of revising the cost estimate to reflect the changes in the program and to incorporate actual costs. In addition, the officials stated that the program was applying a new cost estimating methodology tailored for Agile programs that DHS’s Cost Analysis Division had been developing. In December 2018, program officials stated that they had completed the revised cost estimate, but that it was still undergoing departmental approval. Establishing an updated cost estimate should help FEMA better understand the expected costs to deliver GMM under the program’s current approach and time frames.
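The cost implications of the staffing change can be illustrated with simple arithmetic. The sketch below is a minimal illustration using entirely hypothetical per-team costs and time frames (none of these figures come from GMM documentation); it shows why holding a fixed delivery date by doubling or tripling concurrent Agile teams drives development labor costs up roughly in proportion.

```python
# Minimal sketch: development labor cost as a function of concurrent
# Agile teams. All figures are hypothetical placeholders, not GMM actuals.

TEAM_COST_PER_SPRINT = 150_000  # assumed fully loaded cost of one team per sprint
SPRINT_LENGTH_WEEKS = 2
REMAINING_WEEKS = 100           # assumed time remaining to the target date

def development_labor_cost(num_teams: int) -> int:
    """Labor cost = teams x sprints remaining x cost per team-sprint."""
    sprints_remaining = REMAINING_WEEKS // SPRINT_LENGTH_WEEKS
    return num_teams * sprints_remaining * TEAM_COST_PER_SPRINT

for teams in (4, 8, 12):  # original plan, doubled, tripled
    print(f"{teams:>2} teams: ${development_labor_cost(teams):,}")
```

Under these assumptions, tripling the number of teams triples the remaining development labor cost, which is consistent with the program's anticipated breach of its cost baseline.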
GMM’s Schedule Is Unreliable The success of an IT program depends, in part, on having an integrated and reliable master schedule that defines when the program’s set of work activities and milestone events are to occur, how long they will take, and how they are related to one another. Among other things, a reliable schedule provides a roadmap for systematic execution of an IT program and the means by which to gauge progress, identify and address potential problems, and promote accountability. GAO’s Schedule Assessment Guide defines leading practices related to the following four characteristics that are vital to having a reliable integrated master schedule. Comprehensive. A comprehensive schedule reflects all activities for both the government and its contractors that are necessary to accomplish a program’s objectives, as defined in the program’s work breakdown structure. The schedule also includes the labor, materials, and overhead needed to do the work and depicts when those resources are needed and when they will be available. It realistically reflects how long each activity will take and allows for discrete progress measurement. Well-constructed. A schedule is well-constructed if all of its activities are logically sequenced with the most straightforward logic possible. Unusual or complicated logic techniques are used judiciously and justified in the schedule documentation. The schedule’s critical path represents a true model of the activities that drive the program’s earliest completion date, and total float accurately depicts schedule flexibility. Credible. A schedule that is credible is horizontally traceable—that is, it reflects the order of events necessary to achieve aggregated products or outcomes. It is also vertically traceable—that is, activities in varying levels of the schedule map to one another and key dates presented to management in periodic briefings are consistent with the schedule. Data about risks are used to predict a level of confidence in meeting the program’s completion date. The level of necessary schedule contingency and high-priority risks are identified by conducting a robust schedule risk analysis. Controlled. A schedule is controlled if it is updated regularly by trained schedulers using actual progress and logic to realistically forecast dates for program activities. It is compared to a designated baseline schedule to measure, monitor, and report the program’s progress. The baseline schedule is accompanied by a baseline document that explains the overall approach to the program, defines ground rules and assumptions, and describes the unique features of the schedule. The baseline schedule and current schedule are subject to a configuration management control process. GMM’s schedule was unreliable because it minimally addressed three characteristics—comprehensive, credible, and controlled—and did not address the fourth characteristic of a reliable schedule—well-constructed. One of the most significant issues was that the program’s fast-approaching final delivery date of September 2020 was not informed by a realistic assessment of GMM development activities, but rather was determined by imposing an unsubstantiated delivery date. Table 4 summarizes our assessment of GMM’s schedule. In discussing the reasons for the shortfalls in these practices, program officials stated that they had been uncertain about the level of rigor that should be applied to the GMM schedule, given their use of Agile development.
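One element of the credible characteristic above, a schedule risk analysis, can be sketched in a few lines. The example below is a minimal Monte Carlo simulation over three sequential activities with hypothetical three-point duration estimates; the activity names, durations, and target are illustrative assumptions, not data from GMM's schedule.

```python
import random

# Minimal sketch of a schedule risk analysis: simulate the total duration
# of three sequential activities using hypothetical three-point estimates
# (best case, most likely, worst case, in weeks). Not GMM's actual data.
ACTIVITIES = [
    ("data migration",    (8, 12, 24)),
    ("agile development", (40, 52, 80)),
    ("legacy transition", (10, 16, 30)),
]
TARGET_WEEKS = 90  # hypothetical weeks remaining to the delivery date

def one_trial() -> float:
    # random.triangular takes (low, high, mode)
    return sum(random.triangular(lo, hi, mode)
               for _, (lo, mode, hi) in ACTIVITIES)

trials = [one_trial() for _ in range(10_000)]
confidence = sum(t <= TARGET_WEEKS for t in trials) / len(trials)
print(f"Confidence of finishing within {TARGET_WEEKS} weeks: {confidence:.0%}")
```

Running such an analysis against the full network of program activities is what produces the level of confidence in a completion date that the credible characteristic calls for; it is part of the rigor whose applicability to an Agile schedule program officials questioned.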
However, leading practices state that program schedules should meet all the scheduling practices, regardless of whether a program is using Agile development. As discussed earlier in this report, GMM has already experienced significant schedule delays. For example, the legacy data migration effort, the AFG pilot, and the Agile development contract have been delayed. Program officials also stated that the delay in awarding and starting the Agile contract has delayed other important activities, such as establishing time frames for transitioning legacy systems. A more robust schedule could have helped FEMA predict the impact of delays on remaining activities and identify which activities appeared most critical so that the program could ensure that any risks in delaying those activities were properly mitigated. In response to our review and findings, program officials recognized the need to continually enhance their schedule practices to improve the management and communication of program activities. As a result, in August 2018, the officials stated that they planned to add a master scheduler to the team to improve the program’s schedule practices and ensure that all of the areas of concern we identified are adequately addressed. In October 2018, the officials reported that they had recently added two master schedulers to GMM. According to the statement of objectives, the Agile contractor is expected to develop an integrated master schedule soon after it begins performance. However, program officials stated that GMM is schedule-driven—due to the Executive Steering Group’s expectation that the solution will be delivered by September 2020. The officials added that, if GMM encounters challenges in meeting this time frame, the program plans to seek additional resources to allow it to meet the 2020 target. GMM’s schedule-driven approach has already led to an increase in estimated costs and resources. For example, as previously mentioned, the program has determined that, to meet its original target dates, GMM needs to at least double, and possibly triple, the number of concurrent Agile development teams. In addition, we have previously reported that schedule pressure on federal IT programs can lead to omissions and skipping of key activities, especially system testing. In August 2018, program officials acknowledged that September 2020 may not be feasible and that the overall completion time frames established in the acquisition program baseline may eventually need to be rebaselined. Without a robust schedule to forecast whether FEMA’s aggressive delivery goal for GMM is realistic to achieve, leadership will be limited in its ability to make informed decisions on what additional increases in cost or reductions in scope might be needed to fully deliver the system. FEMA Fully Addressed Three Key Cybersecurity Practices and Partially Addressed Two Others NIST’s risk management framework establishes standards and guidelines for agencies to follow in developing cybersecurity programs. Agencies are expected to use this framework to achieve more secure information and information systems through the implementation of appropriate risk mitigation strategies and by performing activities that ensure that necessary security controls are integrated into agencies’ processes. 
The framework addresses broad cybersecurity and risk management activities, which include the following: Categorize the system: Programs are to categorize systems by identifying the types of information used, selecting a potential impact level (e.g., low, moderate, or high), and assigning a category based on the highest level of impact to the system’s confidentiality, integrity, and availability if the system were compromised. Programs are also to document a description of the information system and its boundaries and should register the system with appropriate program management offices. System categorization is documented in a system security plan. Select and implement security controls: Programs are to determine protective measures, or security controls, to be implemented based on the system categorization results. These security controls are documented in a system security plan. For example, control areas include access controls, incident response, security assessment and authorization, identification and authentication, and configuration management. Once controls are identified, programs are to determine planned implementation actions for each of the designated controls. These implementation actions are also specified in the system security plan. Assess security controls: Programs are to develop, review, and approve a security assessment plan. The purpose of the security assessment plan approval is to establish the appropriate expectations for the security control assessment. Programs are also to perform a security control assessment by evaluating the security controls in accordance with the procedures defined in the security assessment plan, in order to determine the extent to which the controls were implemented correctly. This process is intended to produce a security assessment report that documents the issues, findings, and recommendations. Programs are to conduct initial remediation actions on security controls and reassess those security controls, as appropriate. Obtain an authorization to operate the system: Programs are to obtain security authorization approval in order to operate a system. Resolving weaknesses and vulnerabilities identified during testing is an important step leading up to achieving an authorization to operate. Programs are to establish corrective action plans to address any deficiencies in cybersecurity policies, procedures, and practices. DHS guidance also states that corrective action plans must be developed for every weakness identified during a security control assessment and within a security assessment report. Monitor security controls on an ongoing basis: Programs are to monitor their security controls on an ongoing basis after deployment, including determining the security impact of proposed or actual changes to the information system and assessing the security controls in accordance with a monitoring strategy that determines the frequency of monitoring the controls. For the GMM program’s engineering and test environment, which went live in February 2018, FEMA fully addressed three of the five key cybersecurity practices in NIST’s risk management framework and partially addressed two of the practices. Specifically, FEMA categorized GMM’s environment based on security risk, implemented select security controls, and monitored security controls on an ongoing basis. However, the agency partially addressed the areas of assessing security controls and obtaining an authorization to operate the system.
Table 5 provides a summary of the extent to which FEMA addressed NIST’s key cybersecurity practices for GMM’s engineering and test environment. GMM Categorized the System Based on Security Risk Consistent with NIST’s framework, GMM categorized the security risk of its engineering and test environment and identified it as a moderate-impact environment. A moderate-impact environment is one where the loss of confidentiality, integrity, or availability could be expected to have a serious or adverse effect on organizational operations, organizational assets, or individuals. GMM completed the following steps leading to this categorization: The program documented in its System Security Plan the various types of data and information that the environment will collect, process, and store in support of activities such as conducting technology research, building or enhancing technology, and maintaining IT networks. The program established three information types and assigned security levels of low, moderate, or high impact in the areas of confidentiality, integrity, and availability. A low-impact security level was assigned to two information types: (1) conducting technology research and (2) building or enhancing technology; and a moderate-impact security level was assigned to the third information type: maintaining IT networks. The engineering and test environment was categorized as an overall moderate-impact system, based on the highest security impact level assignment. GMM documented a description of the environment, including a diagram depicting the system’s boundaries, which illustrates, among other things, databases and firewalls. GMM properly registered its engineering and test environment with FEMA’s Chief Information Officer, Chief Financial Officer, and acting Chief Information Security Officer. By conducting the security categorization process, GMM has taken steps that should ensure that the appropriate security controls are selected for the program’s engineering and test environment. GMM Selected and Planned for the Implementation of Controls in Its System Security Plan Consistent with NIST’s framework and the system categorization results, GMM appropriately determined which security controls to implement and planned actions for implementing those controls in its System Security Plan for the engineering and test environment. For example, the program used NIST guidance to select standard controls for a system categorized with a moderate-impact security level. These control areas include, for example, access controls, risk assessment, incident response, identification and authentication, and configuration management. Further, the program documented its planned actions to implement each control in its System Security Plan. For example, GMM documented that the program plans to implement its Incident Response Testing control by participating in an agency-wide exercise and unannounced vulnerability scans. As another example, GMM documented that the program plans to implement its Contingency Plan Testing control by testing the contingency plan annually, reviewing the test results, and preparing after action reports. By selecting and planning for the implementation of security controls, GMM has taken steps to mitigate its security risks and protect the confidentiality, integrity, and availability of the information system.
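The high-water-mark logic behind the categorization described above can be expressed compactly. The sketch below mirrors the impact levels reported for GMM's three information types (the per-objective assignments are simplified from the report's description); the code is illustrative only and is not FEMA's actual tooling.

```python
# Minimal sketch of the high-water-mark categorization step: the overall
# system category is the highest impact level assigned to any security
# objective (confidentiality, integrity, availability) of any information
# type. Impact assignments below are simplified illustrations.
LEVELS = {"low": 0, "moderate": 1, "high": 2}

INFO_TYPES = [  # (information type, confidentiality, integrity, availability)
    ("conducting technology research",   "low", "low", "low"),
    ("building or enhancing technology", "low", "low", "low"),
    ("maintaining IT networks",          "moderate", "moderate", "moderate"),
]

def overall_categorization(info_types) -> str:
    high_water = max(LEVELS[level]
                     for _, *objectives in info_types
                     for level in objectives)
    return {v: k for k, v in LEVELS.items()}[high_water]

print(overall_categorization(INFO_TYPES))  # -> moderate
```

Because the third information type carries a moderate rating, the environment as a whole is categorized as moderate impact, which in turn drives the baseline set of controls selected in the System Security Plan.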
GMM Developed a Security Assessment Plan, but It Lacked Essential Details and Approvals Consistent with NIST’s framework, in January 2018, GMM program officials developed a security assessment plan for the engineering and test environment. According to GMM program officials, this plan was reviewed by the security assessment team. However, the security assessment plan lacked essential details. Specifically, while the plan included the general process for evaluating the environment’s security controls, the planned assessment procedures for all 964 security controls were not sufficiently defined. In particular, GMM program officials copied example assessment procedures from NIST guidance and inserted them into its security assessment documentation for all of its 964 controls, without making further adjustments to explain the steps that should be taken specific to GMM. Table 6 shows an example of a security assessment procedure copied from the NIST guidance that should have been further adjusted for GMM. In addition, the actual assessment procedures that the GMM assessors used to evaluate the security controls were not documented. Instead, the program only documented whether each control passed or failed each test. GMM program officials stated that the planned assessment procedures are based on an agency template that was exported from a DHS compliance tool, and that FEMA security officials have been instructed by the DHS OCIO not to tailor or make any adjustments to the template language. However, the assessment procedures outlined in NIST’s guidance are to serve as a starting point for organizations preparing their program-specific assessments. According to NIST, organizations are expected to select and tailor their assessment procedures for each security control from NIST’s list of suggested assessment options (e.g., review, analyze, or inspect policies, procedures, and related documentation options). DHS OCIO officials stated that, consistent with NIST’s guidance, they expect that components will ensure they are in compliance with the minimum standards and will also add details and additional rigor, as appropriate, to tailor the planned security assessment procedures to fit their unique missions or needs. In November 2018, in response to our audit, DHS OCIO officials stated that they were meeting with FEMA OCIO officials to understand why they did not document the planned and actual assessment procedures performed by the assessors for GMM. Until FEMA ensures that detailed planned evaluation methods and actual evaluation procedures specific to GMM are defined, the program risks assessing security controls incorrectly, having controls that do not work as intended, and producing undesirable outcomes with respect to meeting the security requirements. In addition, the security assessment plan was not approved by FEMA’s OCIO before proceeding with the security assessment. Program officials stated that approval was not required for the security assessment plan prior to the development of the security assessment report. However, NIST guidance states that the purpose of the security assessment plan approval is to establish the appropriate expectations for the security control assessment. By not getting the security assessment plan approved by FEMA’s OCIO before security assessment reviews were conducted, GMM risks inconsistencies between the plan and the security objectives of the organization.
Finally, consistent with NIST guidance, GMM performed a security assessment in December 2017 of the engineering and test environment’s controls, which identified 36 vulnerabilities (23 critical- and high-impact vulnerabilities and 13 medium- and low-impact vulnerabilities). The program also documented these vulnerabilities and associated findings and recommendations in a security assessment report. GMM conducted initial remediation actions (i.e., remediation of vulnerabilities that should be corrected immediately) for 12 of the critical- and high-impact vulnerabilities, and a reassessment of those security controls confirmed that they were resolved by January 2018. The remaining 11 critical- and high-impact vulnerabilities and 13 medium- and low-impact vulnerabilities were to be addressed by corrective action plans as part of the authorization to operate process, which is discussed in the next section. GMM Obtained Authorization to Operate, but Had Not Addressed Known Vulnerabilities or Tested All Controls The authorization to operate GMM’s engineering and test environment was granted on February 5, 2018. Among other things, this decision was based on the important stipulation that the remaining 11 critical- and high-impact vulnerabilities, which were associated with a multifactor authentication capability, would be addressed within 45 days, or by March 22, 2018. However, the program did not meet this deadline and, instead, approximately 2 months after the deadline passed, obtained a waiver to remediate these vulnerabilities by May 9, 2019. Program officials stated that they worked with FEMA OCIO officials to attempt to address these vulnerabilities by the initial deadline, but were unsuccessful in finding a viable solution. Therefore, GMM program officials developed a waiver at the recommendation of the OCIO to provide additional time to develop a viable solution. However, a multifactor authentication capability is essential to ensuring that users are who they say they are before they are granted access to the GMM engineering and test environment, in order to reduce the risk of harmful actors accessing the system. In addition, as of September 2018, the program had not established corrective action plans for the 13 medium- and low-impact vulnerabilities. Program officials stated that they do not typically address low-impact vulnerabilities; however, this conflicts with DHS guidance that specifies that corrective action plans must be developed for every weakness identified during a security control assessment and within a security assessment report. In response to our audit, in October 2018, GMM program officials developed these remaining corrective action plans. The plans indicated that these vulnerabilities were to be fully addressed by January 2019 and April 2019. While the program eventually took corrective actions in response to our audit by developing the missing plans, the GMM program initially failed to follow DHS’s guidance on preparing corrective action plans for all security vulnerabilities. Until GMM consistently follows DHS’s guidance, it will be difficult for FEMA to determine the extent to which GMM’s security weaknesses identified during its security control assessments are remediated.
Additionally, as we have reported at other agencies, vulnerabilities can be indicators of more significant underlying issues and, thus, without appropriate management attention or prompt remediation, GMM is at risk of unnecessarily exposing the program to potential exploits. Moreover, GMM was required to assess all untested controls by March 7, 2018, or no later than 30 days after the approval of the authorization to operate; however, it did not meet this deadline. Specifically, we found that, by October 2018, FEMA had not fully tested 190 security controls in the GMM engineering and test environment. These controls were related to areas such as security incident handling and allocation of resources required to protect an information system. In response to our findings, in October 2018, GMM program officials reported that they had since fully tested 27 controls and partially tested the remaining 163 controls. Program officials stated that testing of the 163 controls is a shared responsibility between GMM and other parties (e.g., the cloud service provider). They added that GMM had completed its portion of the testing but was in the process of verifying the completion of testing by other parties. Program officials stated that the untested controls were not addressed sooner, in part, because of errors resulting from configuration changes in the program’s compliance tool during a system upgrade, which have now been resolved. Until GMM ensures that all security controls have been tested, it remains at an increased risk of exposing the program to potential exploits. GMM Is Using Processes for Monitoring Controls Consistent with the NIST framework, GMM established methods for assessing and monitoring security controls to be conducted after an authorization to operate has been approved. GMM has tailored its cybersecurity policies and practices for monitoring its controls to take into account the frequent and iterative pace with which system functionality is continuously being introduced into the GMM environment. Specifically, the GMM program established a process for assessing the security impact of changes to the system and conducting reauthorizations to operate within the rapid Agile delivery environment. As part of this process, GMM embedded cybersecurity experts on each Agile development team so that they are involved early and can impact security considerations from the beginning of requirements development through testing and deployment of system functionality. In addition, the process involves important steps for ensuring that the system moves from development to completion, while producing a secure and reliable system. For example, it includes procedures for creating, reviewing, and testing new system functionality. As the new system functionality is integrated with existing system functionality, it is to undergo automated testing and security scans to ensure that the security of the system has not been compromised. Further, an automated process is to deploy the code if it passes all security scans, code tests, and code quality checks. GMM’s process for conducting a reauthorization to operate within the rapid delivery Agile development environment is to follow FEMA guidance, which states that all high-level changes made to a FEMA IT system must receive approval from both a change advisory board and the FEMA Chief Information Officer.
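The automated deployment gate described above can be sketched as follows. The function names and gate logic are hypothetical stand-ins for GMM's pipeline stages, not the program's actual tooling; the sketch simply shows code being deployed only when every security scan, test, and quality check passes.

```python
# Minimal sketch of a deployment gate: deploy new functionality only if
# it passes every automated check. All functions are hypothetical
# stand-ins for pipeline stages, not GMM's actual tooling.

def security_scans_pass(build: str) -> bool:
    return True  # stand-in for static/dynamic security scans

def code_tests_pass(build: str) -> bool:
    return True  # stand-in for the automated test suite

def quality_checks_pass(build: str) -> bool:
    return True  # stand-in for code-quality gates (lint, coverage)

GATES = (security_scans_pass, code_tests_pass, quality_checks_pass)

def deploy_if_clean(build: str) -> bool:
    """Deploy only when every gate passes; otherwise block the release."""
    if all(gate(build) for gate in GATES):
        print(f"deploying {build}")
        return True
    print(f"blocking {build}: a gate failed")
    return False

deploy_if_clean("sprint-12-increment")
```

Changes significant enough to affect the system's authorization fall outside this automated path; as noted above, they require approval from a change advisory board and the FEMA Chief Information Officer.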
The board and FEMA Chief Information Officer are to focus their review and approval on scheduled releases and epics (i.e., collections of user stories). Additionally, the Information System Security Officer is to review each planned user story and, if it is determined that the proposed changes may impact the integrity of the authorization, the Information System Security Officer is to work with the development team to begin the process of updating the system authorization. Finally, GMM uses automated tools to track the frequency with which security controls are assessed and to ensure that required scanning data are received by FEMA for reporting purposes. Program officials stated that, in the absence of department-level and agency-level guidance, they have coordinated with DHS and FEMA OCIO officials to ensure that these officials are in agreement with GMM’s approach to continuous monitoring. By having policies and procedures in place for monitoring controls, FEMA management is positioned to more effectively prioritize and plan its risk response to current threats and vulnerabilities for the GMM program. Conclusions Given FEMA’s highly complex grants management environment, with its many stakeholders, IT systems, and internal and external users, implementing leading practices for business process reengineering and IT requirements management is critical for success. FEMA has taken many positive steps, including ensuring executive leadership support for business process reengineering, documenting the agency’s grants management processes and performance improvement goals, defining initial IT requirements for the program, incorporating input from end user stakeholders into the development and implementation process, and taking recent actions to improve its delivery of planned IT requirements. Nevertheless, until the GMM program finalizes plans and time frames for implementing its organizational change management actions, plans and communicates system transition activities, and maintains clear traceability of IT requirements, FEMA will be limited in its ability to provide streamlined grants management processes and effectively deliver a modernized IT system to meet the needs of its large range of users. While GMM’s initial cost estimate was reliable, key assumptions about the program have changed since the initial estimate and, therefore, it no longer reflects the current approach for the program. The forthcoming updated cost estimate is expected to better reflect the current approach. However, the program’s schedule to fully deliver GMM by September 2020 is unreliable, aggressive, and unrealistic. The delays the program has experienced to date further compound GMM’s schedule issues. Without a robust schedule that has been informed by a realistic assessment of GMM’s development activities, leadership will be limited in its ability to make informed decisions on what additional increases in cost or reductions in scope might be needed to achieve its goals. Further, FEMA’s implementation of cybersecurity practices for GMM in the areas of system categorization, control selection and implementation, and monitoring will help the program. However, GMM lacked essential details for evaluating security controls, did not approve the security assessment plan before proceeding with the security assessment, did not follow DHS’s guidance to develop corrective action plans for all security vulnerabilities, and did not fully test all security controls.
As a result, the GMM engineering and test environment remains at an increased risk of exploitation. Recommendations for Executive Action We are making eight recommendations to FEMA: The FEMA Administrator should ensure that the GMM program management office finalizes the organizational change management plan and time frames for implementing change management actions. (Recommendation 1) The FEMA Administrator should ensure that the GMM program management office plans and communicates its detailed transition activities to its affected customers before they transition to GMM and undergo significant changes to their processes. (Recommendation 2) The FEMA Administrator should ensure that the GMM program management office implements its planned changes to its processes for documenting requirements for future increments and ensures it maintains traceability among key IT requirements documents. (Recommendation 3) The FEMA Administrator should ensure that the GMM program management office updates the program schedule to address the leading practices for a reliable schedule identified in this report. (Recommendation 4) The FEMA Administrator should ensure that the FEMA OCIO defines sufficiently detailed planned evaluation methods and actual evaluation methods for assessing security controls. (Recommendation 5) The FEMA Administrator should ensure that the FEMA OCIO approves a security assessment plan before security assessment reviews are conducted. (Recommendation 6) The FEMA Administrator should ensure that the GMM program management office follows DHS guidance on preparing corrective action plans for all security vulnerabilities. (Recommendation 7) The FEMA Administrator should ensure that the GMM program management office fully tests all of its security controls for the system. (Recommendation 8) Agency Comments and Our Evaluation DHS provided written comments on a draft of this report, which are reprinted in appendix IV. In its comments, the department concurred with all eight of our recommendations and provided estimated completion dates for implementing each of them. For example, with regard to recommendation 4, the department stated that FEMA plans to update the GMM program schedule to address the leading practices for a reliable schedule by April 30, 2019. In addition, for recommendation 7, the department stated that FEMA plans to ensure that corrective action plans are prepared by July 31, 2019, to address all identified security vulnerabilities for GMM. If implemented effectively, the actions that FEMA plans to take in response to the recommendations should address the weaknesses we identified. We also received technical comments from DHS and FEMA officials, which we incorporated, as appropriate. We are sending copies of this report to the Secretary of Homeland Security and interested congressional committees. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4456 or harriscc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V.
Appendix I: Objectives, Scope, and Methodology Our objectives were to (1) determine the extent to which the Federal Emergency Management Agency (FEMA) is implementing leading practices for reengineering its grants management business processes and incorporating business needs into Grants Management Modernization (GMM) information technology (IT) requirements; (2) assess the reliability of the program’s estimated costs and schedule; and (3) determine the extent to which FEMA is addressing key cybersecurity practices for GMM. To address the first objective, we reviewed GAO’s Business Process Reengineering Assessment Guide and the Software Engineering Institute’s Capability Maturity Model Integration for Development to identify practices associated with business process reengineering and IT requirements management. We then selected six areas that, in our professional judgment, represented foundational practices that were of particular importance to the successful implementation of an IT modernization effort that is using Agile development processes. We also selected the practices that were most relevant based on where GMM was in the system development lifecycle, and we discussed the practice areas with FEMA officials. The practices are: Ensuring executive leadership support for process reengineering Assessing the current and target business environment and business performance goals Establishing plans for implementing new business processes Establishing clear, prioritized, and traceable IT requirements Tracking progress in delivering IT requirements Incorporating input from end user stakeholders We also reviewed selected chapters of GAO’s draft Agile Assessment Guide (Version 6A), which is intended to establish a consistent framework based on best practices that can be used across the federal government for developing, implementing, managing, and evaluating agencies’ IT investments that rely on Agile methods. To develop this guide, GAO worked closely with Agile experts in the public and private sector; some chapters of the guide are considered more mature because they have been reviewed by the expert panel. We reviewed these chapters to ensure that our expectations for how FEMA should apply the six practices for business process reengineering and IT requirements management are appropriate for an Agile program and are consistent with the draft guidance that is under development. Additionally, since Agile development programs may use different terminology to describe their software development processes, the Agile terms used in this report (e.g., increment, sprint, and epic) are specific to the GMM program. We obtained and analyzed FEMA grants management modernization documentation, such as current and target grants management business processes, the acquisition program baseline, operational requirements document, concept of operations, requirements analyses workbooks, Grants Management Executive Steering Group artifacts, stakeholder outreach artifacts, Agile increment- and sprint-level planning and development artifacts, and the requirements backlog. We assessed the program documentation against the selected practices to determine the extent to which the agency had implemented them.
We then assessed each practice area as: fully implemented—FEMA provided complete evidence that showed it fully implemented the practice area; partially implemented—FEMA provided evidence that showed it partially implemented the practice area; not implemented—FEMA did not provide evidence that showed it implemented any of the practice area. Additionally, we observed Agile increment and sprint development activities at GMM facilities in Washington, D.C. We also observed a demonstration of how the program manages its lower level requirements (i.e., user stories and epics) and maintains traceability of the requirements using an automated tool at GMM facilities in Washington, D.C. We also interviewed FEMA officials, including the GMM Program Executive, GMM Program Manager, GMM Business Transformation Team Lead, and Product Owner regarding their efforts to streamline grants management business processes, collect and incorporate stakeholder input, and manage GMM’s requirements. In addition, we interviewed FEMA officials from four out of 16 grant program offices and two out of 10 regional offices to obtain contextual information and illustrative examples of FEMA’s efforts to reengineer grants management business processes and collect business requirements for GMM. Specifically, we selected the four grant program offices based on a range of grant programs managed, legacy systems used, and the amount of grant funding awarded. We also sought to select a cross section of different characteristics, such as selecting larger grant program offices, as well as smaller offices. In addition, we ensured that our selection included the Assistance to Firefighters Grants (AFG) program office because officials in this office represent the first GMM users and, therefore, are more actively involved with the program’s Agile development practices. Based on these factors, we selected: Public Assistance Division, Individual Assistance Division, AFG, and National Fire Academy. Additionally, the four selected grant program offices are responsible for 16 of the total 45 grant programs and are users of five of the nine primary legacy IT systems. The four selected grant program offices also represent about 68 percent of the total grant funding awarded by FEMA from fiscal years 2005 through 2016. We selected two regional offices based on (1) the largest amount of total FEMA grant funding for fiscal years 2005 through 2016—Region 6, located in Denton, Texas; and (2) the highest percentage of AFG funding compared to the office’s total grant funding awarded from fiscal years 2005 through 2016—Region 5, located in Chicago, Illinois. To assess the reliability of data from the program’s automated IT requirements management tool, we interviewed knowledgeable officials about the quality control procedures used by the program to assure accuracy and completeness of the data. We also compared the data to other relevant program documentation on GMM requirements. We determined that the data used were sufficiently reliable for the purpose of evaluating GMM’s practices for managing IT requirements. For our second objective, to assess the reliability of GMM’s estimated costs and schedule, we reviewed documentation on GMM’s May 2017 lifecycle cost estimate and on the program’s schedule, dated May 2018.
To assess the reliability of the May 2017 lifecycle cost estimate, we evaluated documentation supporting the estimate, such as the cost estimating model, the report on GMM’s Cost Estimating Baseline Document and Life Cycle Cost Estimate, and briefings provided to the Department of Homeland Security (DHS) and FEMA management regarding the cost estimate. We assessed the cost estimating methodologies, assumptions, and results against leading practices for developing a comprehensive, accurate, well-documented, and credible cost estimate, identified in GAO’s Cost Estimating and Assessment Guide. We also interviewed program officials responsible for developing and reviewing the cost estimate to understand their methodology, data, and approach for developing the estimate. We found that the cost data were sufficiently reliable. To assess the reliability of the May 2018 GMM program schedule, we evaluated documentation supporting the schedule, such as the integrated master schedule, acquisition program baseline, and Agile artifacts. We assessed the schedule documentation against leading practices for developing a comprehensive, well-constructed, credible, and controlled schedule, identified in GAO’s Schedule Assessment Guide. We also interviewed GMM program officials responsible for developing and managing the program schedule to understand their practices for creating and maintaining the schedule. We noted in our report the instances where the quality of the schedule data impacted the reliability of the program’s schedule. For both the cost estimate and program schedule, we assessed each leading practice as: fully addressed—FEMA provided complete evidence that showed it implemented the entire practice area; substantially addressed—FEMA provided evidence that showed it implemented more than half of the practice area; partially addressed—FEMA provided evidence that showed it implemented about half of the practice area; minimally addressed—FEMA provided evidence that showed it implemented less than half of the practice area; not addressed—FEMA did not provide evidence that showed it implemented any of the practice area. Finally, we provided FEMA with draft versions of our detailed analyses of the GMM cost estimate and schedule. This was done to verify that the information on which we based our findings was complete, accurate, and up-to-date. Regarding our third objective, to determine the extent to which FEMA is addressing key cybersecurity practices for GMM, we reviewed documentation regarding DHS and FEMA cybersecurity policies and guidance, and FEMA’s authorization to operate for the program’s engineering and test environment. We evaluated the documentation against all six cybersecurity practices identified in the National Institute of Standards and Technology’s (NIST) Risk Management Framework. While NIST’s Risk Management Framework identifies six total practices, for reporting purposes, we combined two interrelated practices—selection of security controls and implementation of security controls—into a single practice. The resulting five practices were: categorizing the system based on security risk, selecting and implementing security controls, assessing security controls, obtaining an authorization to operate the system, and monitoring security controls on an ongoing basis. 
We obtained and analyzed key artifacts supporting the program’s efforts to address these risk management practices, including the program’s System Security Plan, the Security Assessment Plan and Report, Authorization to Operate documentation, and the program’s continuous monitoring documentation. We also interviewed officials from the GMM program office and FEMA’s Office of the Chief Information Officer, such as the GMM Security Engineering Lead, GMM Information System Security Officer, and FEMA’s Acting Chief Information Security Officer, regarding their efforts to assess, document, and review security controls for GMM. We assessed the evidence against the five practices to determine the extent to which the agency had addressed them. We then assessed each practice area as: fully addressed—FEMA provided complete evidence that showed it fully implemented the practice area; partially addressed—FEMA provided evidence that showed it partially implemented the practice area; not addressed—FEMA did not provide evidence that showed it implemented any of the practice area. To assess the reliability of data from the program’s automated security controls management tool, we interviewed knowledgeable officials about the quality control procedures used by the program to assure accuracy and completeness of the data. We also compared the data to other relevant program documentation on GMM security controls for the engineering and test environment. We found that some of the security controls data we examined were sufficiently reliable for the purpose of evaluating FEMA’s cybersecurity practices for GMM, and we noted in our report the instances where the accuracy of the data impacted the program’s ability to address key cybersecurity practices. We conducted this performance audit from December 2017 to April 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Federal Emergency Management Agency’s Grant Programs The Federal Emergency Management Agency (FEMA) awards many different types of grants to state, local, and tribal governments and nongovernmental entities. These grants are to help communities prevent, prepare for, protect against, mitigate the effects of, respond to, and recover from disasters and terrorist attacks. Appendix III: Overview of Agile Software Development Agile software development is a type of incremental development that calls for the rapid delivery of software in small, short increments. The use of an incremental approach is consistent with the Office of Management and Budget’s guidance as specified in its information technology (IT) Reform Plan, as well as the legislation commonly referred to as the Federal Information Technology Acquisition Reform Act. Many organizations, especially in the federal government, are accustomed to using a waterfall software development model, which typically consists of long, sequential phases, and differs significantly from the Agile development approach. Agile practices integrate planning, design, development, and testing into an iterative lifecycle to deliver software early and often. 
Figure 7 provides a depiction of software development using the Agile approach, as compared to a waterfall approach. The frequent iterations of Agile development are intended to effectively measure progress, reduce technical and programmatic risk, and respond to stakeholder feedback on changes to IT requirements more quickly than traditional methods. Despite these intended benefits, organizations adopting Agile must overcome challenges in making significant changes to how they are accustomed to developing software. The significant differences between Agile and waterfall development impact how IT programs are planned, implemented, and monitored in terms of cost, schedule, and scope. For example, in waterfall development, significant effort is devoted upfront to document detailed plans and all IT requirements for the entire scope of work at the beginning of the program, and cost and schedule can be varied to complete that work. However, for Agile programs the precise details are unknown upfront, so initial planning of cost, scope, and timing would be conducted at a high level, and then supplemented with more specific plans for each iteration. While cost and schedule are set for each iteration, requirements for each iteration (or increment) can be variable as they are learned over time and revised to reflect experiences from completed iterations and to accommodate changing priorities of the end users. The differences in these two software development approaches are shown in figure 8. Looking at figure 8, the benefit of using traditional program management practices, such as establishing a cost estimate or a robust schedule, is not obvious. However, unlike a theoretical environment, many government programs may not have the autonomy to manage completely flexible scope, as they must deliver certain minimal specifications with the cost and schedule provided. In those cases, it is vital for the team to understand and differentiate the IT requirements that are “must haves” from the “nice to haves” early in the planning effort. This would help facilitate delivery of the “must have” requirements first, thereby providing users with the greatest benefits as soon as possible. Appendix IV: Comments from the Department of Homeland Security Appendix V: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, the following staff made key contributions to this report: Shannin G. O’Neill (Assistant Director), Jeanne Sung (Analyst in Charge), Andrew Beggs, Rebecca Eyler, Kendrick Johnson, Thomas J. Johnson, Jason Lee, Jennifer Leotta, and Melissa Melvin.
Why GAO Did This Study FEMA, a component of DHS, annually awards billions of dollars in grants to help communities prepare for, mitigate the effects of, and recover from major disasters. However, FEMA's complex IT environment supporting grants management consists of many disparate systems. In 2008, the agency attempted to modernize these systems but experienced significant challenges. In 2015, FEMA initiated a new endeavor (the GMM program) aimed at streamlining and modernizing the grants management IT environment. GAO was asked to review the GMM program. GAO's objectives were to (1) determine the extent to which FEMA is implementing leading practices for reengineering its grants management processes and incorporating needs into IT requirements; (2) assess the reliability of the program's estimated costs and schedule; and (3) determine the extent to which FEMA is addressing key cybersecurity practices. GAO compared program documentation to leading practices for process reengineering and requirements management, cost and schedule estimation, and cybersecurity risk management, as established by the Software Engineering Institute, National Institute of Standards and Technology, and GAO. What GAO Found Of six important leading practices for effective business process reengineering and information technology (IT) requirements management, the Federal Emergency Management Agency (FEMA) fully implemented four and partially implemented two for the Grants Management Modernization (GMM) program (see table). Specifically, FEMA ensured senior leadership commitment, took steps to assess its business environment and performance goals, took recent actions to track progress in delivering IT requirements, and incorporated input from end user stakeholders. However, FEMA has not yet fully established plans for implementing new business processes or established complete traceability of IT requirements. Until FEMA fully implements the remaining two practices, it risks delivering an IT solution that does not fully modernize FEMA's grants management systems. While GMM's initial May 2017 cost estimate of about $251 million was generally consistent with leading practices for a reliable, high-quality estimate, it no longer reflects current assumptions about the program. FEMA officials stated in December 2018 that they had completed a revised cost estimate, but it was undergoing departmental approval. GMM's program schedule was inconsistent with leading practices; of particular concern was that the program's final delivery date of September 2020 was not informed by a realistic assessment of GMM development activities, but rather was determined by imposing an unsubstantiated delivery date. Developing sound cost and schedule estimates is necessary to ensure that FEMA has a clear understanding of program risks. Of five key cybersecurity practices, FEMA fully addressed three and partially addressed two for GMM. Specifically, it categorized GMM's system based on security risk, selected and implemented security controls, and monitored security controls on an ongoing basis. However, the program had not initially established corrective action plans for 13 medium- and low-risk vulnerabilities. This conflicts with the Department of Homeland Security's (DHS) guidance that specifies that corrective action plans must be developed for every weakness identified.
Until FEMA, among other things, ensures that the program consistently follows the department's guidance on preparing corrective action plans for all security vulnerabilities, GMM's system will remain at increased risk of exploits. What GAO Recommends GAO is making eight recommendations to FEMA to implement leading practices related to reengineering processes, managing requirements, scheduling, and implementing cybersecurity. DHS concurred with all recommendations and provided estimated dates for implementing each of them.
Background Considerations for Exchange Enrollment and Plan Selection Qualified health plans sold through the exchanges must meet certain minimum requirements, including those related to benefits coverage. Beyond these requirements, many elements of plans can vary, including their cost and availability. Those who opt to enroll in a plan generally pay for their health care in two ways: (1) a premium to purchase the insurance, and (2) cost-sharing for the particular health services they receive (for example, deductibles, coinsurance, and co-payments). Metal Tiers Qualified health plans are offered at one of four metal tiers that reflect the out-of-pocket costs that may be incurred by a consumer. These tiers correspond to the plan’s actuarial value—a measure of the relative generosity of a plan’s benefits that is expressed as a percentage of the covered medical expenses expected to be paid, on average, by the issuer for a standard population and set of allowed charges for in-network providers. In general, as actuarial value increases, consumer cost- sharing decreases. The actuarial values of the metal tiers are: bronze (60 percent), silver (70 percent), gold (80 percent), and platinum (90 percent). If an issuer sells a qualified health plan on an exchange, it must offer at least one plan at the silver level and one plan at the gold level; issuers are not required to offer bronze or platinum plans. Financial Assistance Individuals purchasing coverage through the exchanges may be eligible, depending on their incomes, to receive financial assistance to offset the costs of their coverage. According to HHS, more than 80 percent of enrollees obtained financial assistance in the first half of 2017, which came in the form of premium tax credits or cost-sharing reductions. Premium tax credits. These are designed to reduce an eligible individual’s premium costs, and can either be paid in advance on a monthly basis to an enrollee’s issuer—referred to as advance premium tax credits—or received after filing federal income taxes for the prior year. To be eligible for premium tax credits, enrollees must generally have household incomes of at least 100, but no more than 400, percent of the federal poverty level. The amount of the premium tax credit varies based on enrollees’ income relative to the cost of premiums for their local benchmark plan—which is the second lowest cost silver plan available—but consumers do not need to be enrolled in the benchmark plan in order to be eligible for these tax credits. Cost-sharing reductions. Enrollees who qualify for premium tax credits, have household incomes between 100 and 250 percent of the federal poverty level, and enroll in a silver tier plan may also be eligible to receive cost-sharing reductions, which lower enrollees’ deductibles, coinsurance, and co-payments. To reimburse issuers for reduced cost-sharing from qualified enrollees, HHS made payments to issuers (referred to as cost-sharing reduction payments) until October 2017, when it discontinued these payments. Despite HHS’s decision to discontinue cost-sharing reduction payments, issuers are still required under PPACA to offer cost-sharing reductions to eligible enrollees. Since consumers who receive these reductions are generally enrolled in silver plans, insurance commissioners in most states instructed the issuers in their states to increase 2018 premiums for silver plans offered on the exchanges to reflect the discontinued federal payments. 
This has been referred to as “silver-loading” and resulted in substantial increases in exchange-based silver plan premiums for 2018. (See fig. 1.) Because the amount of an eligible enrollee’s premium tax credit is based on the premium for the enrollee’s local benchmark plan (the second lowest cost silver plan available to an enrollee), the value of this form of financial assistance also increased significantly for 2018. As we have previously reported, the number and type of plans available in the health insurance exchanges varies from year to year. Issuers can add new plans and adjust or discontinue existing plans, as long as the plans meet certain minimum requirements—such as covering essential health benefits. Issuers can also extend or restrict the locations in which they offer plans. According to HHS, while individuals seeking 2018 coverage were able to select from an average of 25 plans across the various metal tiers, 29 percent of consumers were able to select plans from only one issuer. Exchange Outreach HHS performs outreach to increase awareness of the open enrollment period and facilitate enrollment among healthcare.gov consumers—including those new to the exchanges as well as those returning to renew their coverage. Outreach to these different types of enrollees can vary. For example, while outreach to those new to the exchanges may focus more on the importance of having insurance, outreach to existing enrollees may focus on encouraging them to go back to the exchange to shop for the best option. Consumer Assistance All exchanges are required to carry out certain functions to assist consumers with their applications for enrollment and financial assistance, among other things. HHS requires exchanges to operate a website and toll-free call center to address the needs of consumers requesting assistance with enrollment, and to conduct outreach and educational activities to help consumers make informed decisions about their health insurance options. HHS administers the federal healthcare.gov website, which allows consumers in states using the website for enrollment to directly compare health plans based on a variety of factors, such as premiums and provider networks. HHS also operates a Marketplace Call Center to respond to consumer questions about enrollment. Consumers may apply for coverage through the call center, the website, via mail, or in person (in some areas), with assistance from navigator organizations or agents and brokers. Navigators. PPACA required all exchanges to establish “navigator” programs to conduct public education activities to raise awareness of the coverage available through the exchanges, among other things. As part of HHS’s funding agreement with navigator organizations in states using the federally facilitated exchange, HHS requires them to maintain relationships with consumers who are uninsured or underinsured. They must also examine consumers’ eligibility for other government health programs, such as Medicaid, and provide other assistance to consumers—for example, by helping them understand how to access their coverage. Agents and Brokers. Licensed by states, agents and brokers may also provide assistance to those seeking to enroll in a health plan sold on the exchanges; however, they are generally paid by issuers. They may sell products for a single issuer, from which they receive a salary, or for a variety of issuers, receiving a commission for each plan they sell.
Enrollment through Healthcare.gov Was 5 Percent Lower in 2018 than 2017, and Stakeholders Reported That Plan Affordability Likely Played a Major Role in Enrollment Exchange Enrollment through Healthcare.gov Was 5 Percent Lower in 2018 than 2017 About 8.7 million consumers enrolled in healthcare.gov plans during the open enrollment period for 2018 coverage, 5 percent fewer than the 9.2 million who enrolled for 2017. This decline continues a trend that began after enrollment peaked at 9.6 million consumers in 2016. Since that peak, enrollment has decreased by 9 percent. Enrollment in plans sold by state-based exchanges that use their own enrollment website has remained relatively stable during the same time period, with just over 3.0 million enrollees each year since 2016. Overall, enrollment in federal and state exchanges has declined 7 percent from a peak of nearly 12.7 million enrollees in 2016, largely driven by the decrease in enrollment in exchanges using healthcare.gov. (See table 1.) HHS officials told us that they did not want to speculate on the specific factors that affected enrollment this year, but noted that the exchanges are designed for consumers to utilize as needed, which means some fluctuation from year to year is expected. Decreased demand for exchange-based insurance could reflect increases in the number of people with other types of health coverage, such as coverage through other public programs or through their employers. Enrollees who were new to healthcare.gov coverage made up a smaller proportion of total enrollees in 2018 than in 2017, continuing a trend seen in prior years. The proportion of new enrollees decreased from 33 percent (3 million) in 2017 to 28 percent (2.5 million) in 2018 (see fig. 2). Some stakeholders noted the importance of enrolling new, healthy enrollees each year to maintain the long-term viability of the exchanges. However, other stakeholders noted that they had expected the number and proportion of new enrollees to decrease over time because a large majority of those who wanted coverage and were eligible for financial assistance had likely already enrolled. The increasing proportion of enrollees who return to the exchanges for their coverage could also demonstrate their need for or satisfaction with this coverage option. The demographic characteristics of enrollees remained largely constant from 2017 through 2018. For example, the proportion of enrollees with household incomes of 100 to 250 percent of the federal poverty level remained similar at 71 percent in 2017 and 70 percent in 2018. In addition, the proportion of enrollees whose households were located in rural areas was 18 percent in both years. However, the proportion of healthcare.gov enrollees aged 55 and older increased from 27 percent in 2017 to 29 percent in 2018. Appendix III provides detailed information on the characteristics of enrollees in 2017 and 2018. Stakeholders Reported That Plan Affordability Likely Played a Major Role in 2018 Exchange Enrollment and Plan Selection According to stakeholders we interviewed, plan affordability likely played a major role in 2018 exchange enrollment—both attracting and detracting from enrollment—and enrollees’ plan selection. In 2018, premiums across all healthcare.gov plans increased an average of 30 percent—more than expected given overall health cost trends.
As a result of these premium increases, plans were less affordable in 2018 than in 2017 for exchange consumers without advance premium tax credits, who made up about 15 percent of consumers in 2018. One driver of these premium increases was the elimination of federal cost-sharing reduction payments to issuers in late 2017, which resulted in larger premium increases for silver tier plans (the most popular healthcare.gov metal tier). For example, among enrollees who did not use advance premium tax credits, the average monthly premium amount paid for silver plans increased 45 percent (from $424 in 2017 to $614 in 2018). Average premiums for these enrollees also increased for bronze and gold plans, but not by as much—22 percent for bronze plans (from $374 in 2017 to $455 in 2018) and 23 percent for gold plans (from $509 in 2017 to $628 in 2018). Most stakeholders we interviewed told us the decreased affordability of plans likely resulted in lower enrollment in exchange plans for these consumers. Some stakeholders we interviewed reported personally encouraging consumers who were not eligible for premium tax credits to purchase their coverage off the exchanges, where they could often purchase the same health insurance plan for a lower price. However, despite overall premium increases, plans became more affordable for the more than 85 percent of exchange consumers who used advance premium tax credits, because the value of the premium tax credits increased significantly in order to compensate for the higher premiums of silver plans. For example, the average value of monthly advance premium tax credits for those enrolled in any exchange plan increased 44 percent, from $383 in 2017 to $550 in 2018—the largest increase in the program’s history. As a result, enrollees who used advance premium tax credits faced lower net monthly premiums on average in 2018 than they had in 2017—specifically, enrollees’ average net monthly premiums across all plans decreased 16 percent from $106 in 2017 to $89 in 2018. According to most stakeholders we interviewed, the enhanced affordability of net monthly premiums among consumers who used advance premium tax credits likely encouraged enrollment among this group. (See fig. 3.) Stakeholders we interviewed also noted that plan affordability likely played a major role in enrollees’ plan selection, including the metal tier of their coverage. This finding is consistent with our prior work, which showed that plan cost—including premiums—is a driving factor in exchange enrollees’ selection of a plan. Specifically, we found that while silver plans remained the most popular healthcare.gov metal tier, covering 65 percent of all enrollees in 2018, this proportion decreased 9 percentage points from 2017 as more enrollees selected bronze and gold plans. (See fig. 4.) Stakeholders reported that consumers using advance premium tax credits benefited from enhanced purchasing power in 2018 due to the impact of silver loading, which likely served as a driving factor in these consumers’ plan selections. Specifically, they noted that the increased availability of free bronze and low-cost gold plans (after tax credits were applied) for such consumers likely explained why many enrollees moved from silver to bronze or gold plans for 2018.
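The affordability shifts just described follow from simple arithmetic on the averages GAO reports. The sketch below is illustrative only: it uses the rounded figures from this section and assumes the net premium is simply the gross premium less the advance premium tax credit applied; the implied gross premiums it derives are a byproduct of that assumption, not figures reported by GAO.

```python
# Net premium = gross premium minus advance premium tax credit (APTC),
# using the rounded averages reported above for APTC users, all plans.
avg_aptc = {2017: 383, 2018: 550}        # average monthly APTC
avg_net_premium = {2017: 106, 2018: 89}  # average net monthly premium

aptc_growth = (avg_aptc[2018] - avg_aptc[2017]) / avg_aptc[2017]
net_change = (avg_net_premium[2018] - avg_net_premium[2017]) / avg_net_premium[2017]
print(f"APTC growth: {aptc_growth:.0%}")       # 44%
print(f"Net premium change: {net_change:.0%}")  # -16%

# Implied average gross premium for APTC users (an artifact of this
# sketch's simplifying assumption, not a GAO-reported figure).
implied_gross = {yr: avg_net_premium[yr] + avg_aptc[yr] for yr in (2017, 2018)}
print(implied_gross)  # {2017: 489, 2018: 639}
```

The same arithmetic applied to the tier-level figures that follow shows why bronze and gold plans became relatively more attractive to subsidized enrollees.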
While average monthly net premiums paid by these consumers decreased overall from 2017 to 2018 due to the tax credits, the changes were most pronounced for those enrolled in bronze or gold plans (which decreased 36 and 39 percent, respectively), compared to silver plans (which decreased 13 percent). Separately, the enhanced affordability of gold plans, along with the richer benefits they offer, likely led some consumers to move from silver to gold plans in 2018. While the average monthly net premium amount paid for gold plans in 2018 ($207) remained higher than that for less generous silver plans ($88) among those using advance premium tax credits, it was nearly 40 percent lower than the average net premium for gold plans in 2017 ($340). Stakeholders also reported that consumers in some areas were able to access gold plans for a lower cost than silver plans. The proportion of enrollees in gold plans using advance premium tax credits increased from 49 percent to 74 percent—signaling that many enrollees used their higher tax credits to enroll in richer gold plan coverage. As the proportion of enrollees with silver plans declined for 2018, so too did the proportion of enrollees with cost-sharing reductions—which are generally only available to those with silver plans. Specifically, 54 percent of healthcare.gov enrollees received these subsidies in 2018, 6 percentage points lower than the 60 percent who received these subsidies in 2017. Stakeholders Reported That a Variety of Other Factors Likely Affected 2018 Enrollment Stakeholders we interviewed reported that a variety of factors other than plan affordability also likely affected 2018 exchange enrollment, but opinions on the impact of each factor were mixed. Specifically, most stakeholders we interviewed, including all 4 navigator organizations and 3 professional trade organizations, reported that consumer confusion about PPACA and its status likely played a major role in detracting from 2018 healthcare.gov enrollment. Some of these stakeholders attributed consumers’ confusion about the exchanges to efforts to repeal and replace PPACA. In addition, many stakeholders attributed consumer confusion to the Administration’s negative statements about PPACA. Further, many stakeholders reported that as a result of the public debate during 2017 over whether to repeal and replace PPACA, many consumers had questions about whether the law had been repealed and whether insurance coverage was still available through the exchanges. However, other stakeholders reported that this debate likely did not affect enrollment and that consumers who were in need of exchange-based coverage were likely able to find the information they required to enroll. In addition, many stakeholders noted that consumer understanding and enrollment were aided through increased outreach and education events conducted by many groups, including some state and local governments, hospitals, issuers, and community groups. Many stakeholders also noted that the volume of exchange-related news increased significantly before and during the open enrollment period for 2018 coverage, in part due to the ongoing political debate about the future of the exchanges. These stakeholders agreed that this increase in reporting about the exchanges likely resulted in increased consumer awareness and enrollment, even in cases where the coverage negatively portrayed the exchanges.
Many stakeholders also said that reductions in HHS outreach and advertising of the open enrollment period likely detracted from 2018 enrollment, in part because any reduction in promotional activity lowers overall consumer awareness and understanding of the program and its open enrollment period. In particular, some stakeholders reported that outreach and advertising are especially important for increasing new enrollment, especially among younger and healthier consumers whose enrollment can help ensure the long-term stability of the exchanges. However, other stakeholders reported that these reductions likely had no effect on enrollment, noting that most consumers who needed exchange-based coverage were already enrolled in it and were well aware of the program, and also noting that enrollment in 2018 did not dramatically change compared with that of 2017. Stakeholders we interviewed were largely divided on the effects of other factors on 2018 healthcare.gov enrollment, including the shorter 6-week open enrollment period. For example, about half of the stakeholders said that the shorter open enrollment period likely led fewer consumers to enroll due to lack of consumer awareness of the new deadline, as well as to challenges related to the reduced capacity of those helping consumers to enroll. However, many others said that the shorter open enrollment period likely had no effect. In particular, some of these stakeholders noted that enrollment in 2018 was similar to that for 2017 and that during prior open enrollment periods the majority of consumers had enrolled by December 15, as this was the deadline for coverage that began in January. Figure 5 displays the range of stakeholder views on factors affecting 2018 healthcare.gov enrollment, and appendix IV provides selected stakeholder views on these factors. HHS Reduced Consumer Outreach for 2018 and Used Problematic Data to Allocate Navigator Funding HHS reduced its consumer outreach—including paid advertising and navigator funding—for the 2018 open enrollment period. Further, HHS allocated the navigator funding using a narrower approach and problematic data, including consumer application data that it acknowledged were unreliable and navigator organization-reported goal data that were based on an unclear description of the goal, and which HHS and navigator organizations likely interpreted differently. HHS Reduced Paid Advertising HHS reduced the amount it spent on paid advertising for the 2018 open enrollment period by 90 percent, spending $10 million as compared to the $100 million it spent for the 2017 open enrollment period. HHS officials reported that their 2018 advertising approach was a success, noting that they cut wasteful spending on advertising, which resulted in a more cost-effective effort. HHS officials told us that the agency elected to reduce funding for paid advertising to better align with its spending on paid advertising for the Medicare open enrollment period. According to the officials, HHS targeted its reduced funding toward low-cost forms of paid advertising that HHS studies showed were effective in driving enrollment, and that could be targeted to specific populations, such as individuals aged 18 to 34 and individuals who had previously visited healthcare.gov. For example, for 2018, HHS spent about 40 percent of its paid advertising budget on two forms of advertising aimed at reaching these populations.
Specifically, HHS spent $1.2 million on the creation of two digital advertising videos that were targeted to potential young enrollees, and $2.7 million on search advertising, in which Internet search engines displayed a link to healthcare.gov when individuals used relevant search terms. HHS followed up with individuals who visited the link to encourage them to enroll. Agency officials said they focused some of their paid advertising on individuals aged 18 to 34 because in the prior open enrollment period many individuals in this age range enrolled after December 15—the deadline for the 2018 open enrollment period. HHS officials said they did not use paid television advertising because it was too expensive and because it was not optimal for attracting young enrollees—although a 2017 HHS study found this was one of the most effective forms of paid advertising for enrolling new and returning individuals during the prior open enrollment period. See appendix V for HHS’s expenditures for paid advertising for the 2017 and 2018 open enrollment periods. HHS Reduced Navigator Funding and Used a Narrower Approach and Problematic Data to Allocate It HHS reduced navigator funding by 42 percent for 2018, spending $37 million compared to the $63 million it spent for 2017. According to HHS officials, the agency reduced this funding due to a shift in the Administration’s priorities. For the 2018 open enrollment period, HHS planned to rely more heavily on agents and brokers—another source of in-person consumer assistance, who, unlike navigator organizations that are funded through federal grants, are generally paid by the issuers they represent. HHS took steps to highlight their availability and to make it easier for consumers to enroll through them. For example, for the 2018 open enrollment period, HHS made a new “Help on Demand” tool available on healthcare.gov that connected consumers directly to local agents or brokers. HHS also developed a streamlined enrollment process for those enrolling through agents and brokers. HHS also changed its approach for allocating the navigator funding to focus on a narrower measure of navigator organization performance than it had used in the past. According to HHS officials, in prior years, HHS awarded funding based on navigator organizations’ performance on a variety of tasks, such as the extent to which navigator organizations met their self-imposed goals for numbers of public outreach events and individuals assisted with applications for exchange coverage and selection of exchange plans. HHS officials said the agency previously also took state-specific factors, such as the number of uninsured individuals in a state, into account when awarding funding. HHS calculated preliminary navigator funding awards for 2018 using this approach. However, according to HHS officials, the agency later decided to change both its budget and approach for allocating navigator funding for 2018 to hold navigator organizations more accountable for the number of individuals they enrolled in exchange plans. In its new funding allocation approach, rather than taking into account navigator organization performance on a variety of tasks, HHS only considered performance in achieving one goal—the number of individuals each navigator organization planned to assist with selecting or enrolling in exchange plans for 2017 coverage.
In implementing this new approach, HHS compared the number of enrollees whose 2017 exchange coverage applications included navigator identification numbers with each navigator organization’s self-imposed goal. For navigator organizations that did not appear to meet their goals, HHS decreased their preliminary 2018 award amounts proportionately. For navigator organizations that appeared to meet or exceed their goals, HHS left their preliminary 2018 award amounts unchanged. Based on this change in approach, HHS offered 81 of its 98 navigator organizations less funding for 2018, with decreases ranging from less than 1 percent to 98 percent of 2017 funding levels. HHS offered 4 of the 98 navigator organizations increased funding and 13 the same level of funding they received for 2017 (see fig. 6). We found that the data HHS used for its revised funding approach were problematic for multiple reasons. In particular, prior to using the 2017 consumer application data as part of its 2018 funding calculations, HHS had acknowledged that these data were unreliable, in part because navigators were not consistently entering their identification numbers into applications during the 2017 open enrollment period. Specifically, HHS stated in a December 9, 2016, email to navigator organizations that the application data were unreliable and thus could not be used. Over 4 million individuals had enrolled in 2017 coverage by December 10, 2016, so it is likely that many of the applications that HHS used in its 2018 funding calculation included incomplete or inaccurate information with respect to navigator assistance. HHS provided guidance to navigator organizations in the December 2016 email on the importance of, and locations for, entering identification numbers into applications to help improve the reliability of the data. However, some data reliability issues may have remained throughout the 2018 open enrollment period, as two of the navigator organizations we interviewed reported ongoing challenges entering navigator identification numbers into applications during this period. For example, representatives from one navigator organization reported that the application field where navigators enter their identification number was at times pre-populated with an agent or broker’s identification number. Consumer application data may therefore still be unreliable for use in the navigator funding decisions HHS is expected to make later this year for 2019. Moreover, the 2017 goal data that HHS used in its funding calculation were also problematic because HHS described the goal in an unclear manner when it asked navigator organizations to set their goals. As a result, HHS’s interpretation of the goal was likely different from how navigator organizations interpreted and established it. Specifically, in its award application instructions, HHS asked navigator organizations to provide a goal for the number of individuals that they “expected to be assisted with selecting/enrolling in (including re-enrollments)” but HHS did not provide guidance to navigator organizations on how it would interpret the goal. HHS officials told us that they wanted to allow navigator organizations full discretion in setting their goals, since the organizations know their communities best. In its funding calculation, HHS interpreted this goal as the number of individuals navigator organizations planned to enroll in exchange plans.
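The award adjustment described above reduces to a simple ratio. The sketch below is a reconstruction from GAO's description, not HHS's actual computation: the function name and inputs are invented, and HHS's precise formula is not published in this report.

```python
# Sketch of HHS's 2018 navigator award adjustment as described above:
# organizations that appeared to meet or exceed their 2017 goal kept
# their preliminary award; others were reduced proportionately.
def adjusted_award(preliminary_award, attributed_enrollments, goal):
    if goal <= 0:
        raise ValueError("goal must be positive")
    if attributed_enrollments >= goal:
        return preliminary_award              # goal met or exceeded
    return preliminary_award * (attributed_enrollments / goal)

# Example: an organization credited with half of its goal would see
# its preliminary award cut in half.
print(adjusted_award(100_000, 500, 1_000))  # 50000.0
```

In this sketch, attributed_enrollments stands in for the application counts HHS drew from navigator identification numbers, and goal for the organization's self-set target, which HHS read strictly as planned enrollments.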
However, as written in the award application instructions, the goal could be interpreted more broadly, because not all individuals whom navigators assist with the selection of exchange plans ultimately apply and enroll in coverage. Representatives from one navigator organization we spoke with said they did interpret this goal more broadly than how it was ultimately interpreted by HHS—and thus set it as the number of consumers they planned to assist in a variety of ways, not limiting it to those they expected to assist through to the final step of enrollment in coverage. The navigator organization therefore set a higher goal than it otherwise would have, had it understood HHS’s interpretation of the goal, and ultimately received a decrease in funding for 2018. As a result, we found that two of the three inputs in HHS’s calculation of 2018 navigator organization awards were problematic (see fig. 7). HHS’s reduced funding and revised funding allocation approach resulted in a range of implications for navigator organizations. According to HHS officials, eight of the navigator organizations that were offered reduced funding for 2018—with reductions ranging from 50 to 98 percent of 2017 funding levels—declined their awards and withdrew from the program. HHS reported asking the remaining navigator organizations to focus on re-enrolling consumers who had coverage in 2017 and resided in areas where issuers reduced or eliminated plan offerings for 2018, and informing consumers about the shortened open enrollment period for 2018 coverage. Representatives of the navigator community group we interviewed reported that many navigator organizations did focus their resources on enrollment and cut back on outreach efforts, particularly in rural areas. According to self-reported navigator organization data provided by HHS, navigator organizations collectively reported conducting 68 percent fewer outreach events during the 2018 open enrollment period as compared to the 2017 period. Representatives from the navigator organizations we interviewed also reported making changes to their operations; for example, officials from one of the navigator organizations reported cutting staff and rural office locations. Officials from another navigator organization said that they focused their efforts on contacting prior exchange enrollees to assist them with re-enrollment, instead of finding and enrolling new consumers, and de-prioritized assistance with Medicaid enrollment. The three navigator organizations we spoke with that had funding cuts for 2018 also reported that their ability to perform the full range of navigator duties during the rest of the year would be compromised because they needed to make additional cuts in their operations—such as reducing staff and providing less targeted assistance to underserved populations—in order to reduce total costs. One of the three navigator organizations reported that it may go out of business at the end of the 2018 award year. HHS’s narrower approach to awarding funding; lack of reliable, complete data on the extent to which navigator organizations enrolled individuals in exchange plans; and lack of clear guidance to navigator organizations on how to set their goals could hamper the agency’s ability to use the program to meet its objectives. Federal internal control standards state that management should use quality information to achieve the agency’s objectives, such as by using relevant, reliable data for decision-making. 
Without reliable performance data and accurate goals, HHS will be unable to measure the effectiveness of the navigator program and take informed action as necessary. Further, because HHS calculated awards using problematic data, navigator organizations may have received awards that did not accurately reflect their performance in enrolling individuals in exchange plans. Additionally, HHS’s narrow focus on exchange enrollment limited its ability to make decisions based on relevant information. Moving forward, this may affect navigator organizations’ interests and abilities in providing a full range of services to their communities, including underserved populations. This, in turn, could affect HHS’s ability to meet its objectives, such as its objective of improving Americans’ access to health care. HHS Did Not Set Numeric Enrollment Targets for 2018, and Instead Focused on Enhancing Certain Aspects of Consumers’ Experiences HHS did not set any numeric targets related to total healthcare.gov enrollment for 2018, although it had done so in prior years. In those years, HHS used numeric targets to monitor enrollment progress during the open enrollment period and focus its resources on those consumers it believed had a high potential to enroll in exchange coverage. For example, HHS established a target of enrolling a total of 13.8 million individuals during the 2017 open enrollment period and also set numeric enrollment targets for 15 regional markets that the agency identified as presenting strong opportunities for meaningful enrollment increases, partly due to having a high percentage of eligible uninsured individuals. HHS used these regional target markets to focus its outreach, travel, and collaborations with local partners. According to agency officials, during prior open enrollment periods, HHS monitored its performance with respect to its targets and revised its outreach efforts in order to better meet its goals. According to federal internal control standards, agencies should design control activities to achieve their objectives, such as by establishing and monitoring performance measures. HHS has recognized the importance of these internal controls by requiring state-based exchanges to develop performance measures and report on their progress. Without developing numeric targets for healthcare.gov enrollment, HHS’s ability both to perform high-level assessments of its performance and progress and to make critical decisions about how to use its resources is hampered. HHS may also be unable to ensure that it meets its objectives—including its current objective of improving Americans’ access to health care, including by stabilizing the market and implementing policies that increase the mix of younger and healthier consumers purchasing plans through the individual market. HHS leadership decided against setting numeric enrollment targets for the 2018 open enrollment period and instead focused on a goal of enhancing the consumer experience, according to HHS officials. Specifically, HHS officials measured the consumer experience based on the agency’s assessment of healthcare.gov availability and functionality, and call center availability and customer satisfaction. HHS officials told us that they selected these measures of the consumer experience because healthcare.gov and the call center represent two of the largest channels through which consumers interact with the exchange.
HHS reported meeting its goal based on consumers’ improved experiences with these two channels, aspects of which had been problematic in the past. (See fig. 8.) Healthcare.gov. According to HHS officials, the healthcare.gov website achieved enhanced availability and functionality for the 2018 open enrollment period, continuing a trend in improvements over prior years. While HHS scheduled similar periods of healthcare.gov downtime for maintenance in 2017 and 2018, the website had less total downtime during the 2018 open enrollment period because the agency needed to conduct less maintenance. HHS officials attributed the increased availability in part to an operating system upgrade and comprehensive testing of the website that they conducted before the 2018 open enrollment period began. In addition, unlike in prior years, HHS officials said that the agency published scheduled maintenance information for 2018 to reduce scheduling conflicts for consumers and groups providing enrollment assistance. HHS also reported enhancing the functionality of the website for the 2018 open enrollment period, including by adding new tools, such as a “help on demand” feature that links consumers with a local agent or broker willing to assist them, as well as updated content that included more plain language. Many stakeholders we interviewed told us that healthcare.gov functioned well during the open enrollment period and was more available than it had been in prior years. Call Center Assistance. According to HHS officials, the call center reduced wait times and improved customer satisfaction scores in 2018, continuing a trend in improvements over prior years. HHS officials reported average wait times of 5 minutes, 38 seconds for the 2018 open enrollment period—almost four minutes shorter than the average wait time experienced during a comparable timeframe of the 2017 open enrollment period. HHS officials attributed this reduction in wait times to improvements in efficiency, including scripts that used fewer words and generated fewer follow-up questions. In addition, there was a modest reduction in call center volume during similar timeframes of the 2017 and 2018 open enrollment periods. Many stakeholders we interviewed reported that call center assistance was more readily available this year than it had been in prior years. HHS officials also reported an average call center customer satisfaction score of 90 percent in 2018 compared to 85 percent in 2017, based on surveys conducted at the end of customer calls. Although HHS officials reported that the agency met its goal of enhancing specific aspects of the consumer experience for the 2018 open enrollment period, HHS narrowly defined its goal and excluded certain aspects of the consumer experience that it had identified as key as recently as 2017. More specifically, in 2017, HHS reported that successful outreach and education events and the availability of in-person consumer assistance, such as that provided by navigators to help consumers understand plan options, were key aspects of the consumer experience. However, HHS did not include these key items when measuring progress toward its 2018 goal of enhancing the consumer experience. Federal internal control standards state that agencies should identify risks that affect their defined objectives and use quality information to achieve these objectives, including by identifying the information required to achieve the objectives and address related risks.
Because HHS excluded key aspects of the consumer experience from its evaluation of its performance, its assessment of the consumer experience may be incomplete. For example, as noted above, some stakeholders we interviewed told us that consumer confusion likely detracted from enrollment for 2018, and some linked this outcome to HHS’s reduced role in promoting exchange enrollment, including reduced support for navigators, which may have resulted in less in-person consumer assistance. HHS’s assessment of the consumer experience, which focused only on consumers who used the website or reached out to the call center during open enrollment, did not account for the experiences of those who interacted with the health insurance exchanges through other channels, such as through navigators or agents and brokers. Conclusions Some experts have raised questions about the long-term stability of the exchanges absent sufficient enrollment, including among young and healthy consumers. To encourage exchange enrollment, HHS has traditionally conducted a broad outreach and education campaign, including funding navigator organizations that provide in-person enrollment assistance. For the 2018 open enrollment period, HHS reduced its support of navigator organizations and changed its approach for allocating navigator funding to focus on exchange enrollment alone. HHS allocated the funding based on performance data that were problematic for multiple reasons, including because some of the underlying data were unreliable. As a result, navigator organizations received funding that reflected a more limited evaluation of their performance than HHS had used in the past, and that may not have accurately reflected their performance. This raises the risk that navigator organizations will decrease the priority they place on fulfilling a range of other duties for which they are responsible, including providing assistance to traditionally underserved populations, which some navigator organizations we interviewed reported they had either decreased or planned to decrease due to reduced funding. HHS’s lack of complete and reliable data on navigator organization performance hampers the agency’s ability to make appropriately informed decisions about funding. Moreover, its focus on enrollment alone in awarding funding may affect navigator organizations’ ability to fulfill the full range of their responsibilities, which could in turn affect HHS’s ability to use the program as a way to meet its objective of enhancing Americans’ access to health care. In addition, the lack of numeric enrollment targets for HHS to evaluate its performance with respect to the open enrollment period hampers the agency’s ability to make informed decisions about its resources. HHS reported achieving a successful consumer experience for the 2018 open enrollment period based on enhancing its performance in areas that had been problematic in the past. However, the agency’s evaluation of its performance did not include aspects of the consumer experience that it identified in 2017 as key, and for which stakeholders reported problems in 2018. As a result, its assessment of its performance in enhancing the consumer experience was likely incomplete. Absent a more complete assessment, HHS may not have the information it needs to fully understand the consumer experience.
Recommendations for Executive Action We are making the following three recommendations to HHS: The Secretary of HHS should ensure that the approach and data it uses for determining navigator award amounts accurately and appropriately reflect navigator organization performance, for example, by 1. providing clear guidance to navigator organizations on performance goals and other information they must report to HHS that will affect their future awards, 2. ensuring that the fields used to capture the information are functioning properly, and 3. assessing the effect of its current approach to funding navigator organizations to ensure that it is consistent with the agency’s objectives. (Recommendation 1) The Secretary of HHS should establish numeric enrollment targets for healthcare.gov, to ensure it can monitor its performance with respect to its objectives. (Recommendation 2) Should the agency continue to focus on enhancing the consumer experience as a goal for the program, the Secretary of HHS should assess other aspects of the consumer experience, such as those it previously identified as key, to ensure it has quality information to achieve its goal. (Recommendation 3) Agency Comments and Our Evaluation We provided a draft of this report to HHS for comment. In its comments, reproduced in appendix VI, HHS concurred with two of our three recommendations. HHS also provided technical comments, which we incorporated as appropriate. HHS concurred with our recommendation that it ensure that the approach and data it uses for determining navigator awards accurately and appropriately reflect navigator organization performance. In its comments on our draft report, HHS stated that it had notified navigator organizations that their funding would be linked to the organizations’ self-identified performance goals and their ability to meet those goals. On July 10, 2018, HHS issued its 2019 funding opportunity announcement for the navigator program, which required those applying for the award to set performance goals, including for the number of consumers assisted with enrollment and re-enrollment in exchange plans, and also stated that failure to meet such goals may negatively impact a recipient’s application for future funding. In its comments, HHS also noted that it is in the process of updating the healthcare.gov website so that individual applications can hold the identification numbers of multiple entities, such as navigators, agents or brokers, and will work to ensure that the awards align with agency objectives. HHS also concurred with our recommendation that the agency assess other aspects of the consumer experience, such as those it previously identified as key, to ensure it has quality information to achieve its goal. HHS noted that it had assessed the consumer experience based on the availability of the two largest channels supporting exchange operations, and also noted that it will consider focusing on other aspects of the consumer experience as needed. HHS did not concur with our recommendation that the agency establish numeric enrollment targets for healthcare.gov, to ensure that it can monitor its performance with respect to its objectives. Specifically, HHS noted that there are numerous external factors outside of HHS’s control that can affect a consumer’s decision to enroll in exchange coverage, including the state of the economy and employment rates.
HHS stated that it does not believe that enrollment targets are relevant for assessing whether an open enrollment period was successful with respect to the consumer experience. Instead, it believes a more informative performance metric would be to measure whether everyone who utilized healthcare.gov, who qualified for coverage, and who desired to purchase coverage was able to make a plan selection. We continue to believe that the development of numeric enrollment targets is important for effective monitoring of the program and management of its resources. Without establishing numeric enrollment targets for upcoming open enrollment periods, HHS’s ability to evaluate its performance and make informed decisions about how it should deploy its resources is limited. We also believe that these targets could help the agency meet its program objectives of stabilizing the market and of increasing the mix of younger and healthier consumers purchasing plans through the individual market. Furthermore, HHS has previously demonstrated the ability to develop meaningful enrollment targets using available data. For example, in prior years, HHS developed numeric enrollment targets based on a range of factors, including the number of exchange enrollees, number of uninsured individuals, and changes in access to employer-sponsored insurance, Medicaid, and other public sources of coverage. In addition, the agency set numeric enrollment targets for regional markets that took these and other factors into account. Once these targets were established, HHS officials were able to use them to monitor progress throughout the open enrollment period and revise their efforts as needed. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of HHS. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or dickenj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VII. Appendix I: GAO List of Factors That May Have Affected 2018 Healthcare.gov Enrollment We identified a list of factors that may have affected 2018 healthcare.gov enrollment based on a review of Department of Health and Human Services information, interviews with health policy experts, and review of recent publications by these experts related to 2018 exchange enrollment. Factors related to the open enrollment period: Open enrollment conducted during a shorter 6-week open enrollment period. Consumer awareness of this year’s open enrollment deadline. Factors related to plan availability and plan choice: Plan affordability for consumers ineligible for financial assistance. Plan affordability for consumers eligible for financial assistance. Consumers’ perceptions of plan affordability. Availability of exchange-based plan choices. Availability of off-exchange plan choices. Consumer reaction to plan choices. Factors related to outreach and education: Reductions in federal funding allocated to outreach and education, and lack of television and other types of advertising. Top Administration and agency officials’ messaging about the health insurance exchanges and open enrollment.
National and local media reporting on the exchanges and open enrollment. Local outreach and education events conducted by federally funded navigator organizations. Outreach and education efforts and/or advertising by some states, issuers, advocacy groups, community organizations, and agents and brokers. Factors related to enrollment assistance and tools: Availability of one-on-one enrollment assistance from federally funded navigator organizations. Availability of one-on-one enrollment assistance from agents and brokers. Updates to the content and function of the healthcare.gov website. Availability of the healthcare.gov website during the open enrollment period. Availability of assistance through the call center during the open enrollment period. Consumer understanding of the Patient Protection and Affordable Care Act and its status. Automatic re-enrollment occurred on the last day of the open enrollment period. Appendix II: Information about Stakeholders Interviewed Four navigator organizations were selected to reflect a range in: (1) amount of 2018 award from the Department of Health and Human Services (HHS); (2) change in HHS award amount from 2017; (3) region; and (4) target population. Insurance departments in six states that use the federally facilitated exchanges were selected to reflect a range with respect to: (1) 2018 healthcare.gov enrollment outcomes; (2) strategies used for calculating 2018 premiums to compensate for the loss of federal cost-sharing reduction payments; (3) changes in 2018 navigator organization award amounts; and (4) the number of issuers offering 2018 exchange coverage in the state. Three issuers that offered 2018 plans on healthcare.gov exchanges were selected, two of which sold exchange plans in multiple states. Five research and consumer advocacy organizations were selected to provide a range of perspectives with respect to the law and issues related to exchange outreach and enrollment. Three professional trade associations were selected to collectively represent the perspectives of regulators, issuers, and consumer assisters. Two state-based exchanges were selected based on the length of their open enrollment periods—one had one of the shortest open enrollment periods and the other had one of the longest for 2018. Navigator organizations, among other things, carry out public education activities and help consumers enroll in a health insurance plan offered through the exchange. HHS awards financial assistance to navigator organizations that provide these services in states using the federally facilitated exchange. An issuer is an insurance company, insurance service, or insurance organization that is required to be licensed to engage in the business of insurance in a state. State-based exchanges are able to set their own budget and strategy for promoting exchange enrollment and set the length of their open enrollment periods. Appendix III: Characteristics of Healthcare.gov Enrollees in 2017 and 2018 [Data tables from this appendix, covering enrollee characteristics such as metal tier of selected plan and household income, are not reproduced here.] Appendix IV: Selected Stakeholder Views of Factors Likely Affecting 2018 Enrollment in Healthcare.gov Plans We identified a list of factors that may have affected 2018 healthcare.gov enrollment based on a review of Department of Health and Human Services (HHS) information, interviews with health policy experts, and review of recent publications by these experts related to 2018 exchange enrollment.
Using this list, we conducted structured interviews with officials from 23 stakeholder organizations to gather their viewpoints as to whether and how these or other factors affected 2018 health insurance exchange enrollment. Organizations interviewed were selected to reflect a wide range of perspectives and included HHS-funded navigator organizations that provide in-person consumer enrollment assistance, issuers, state insurance departments, professional trade organizations, research and advocacy organizations, and state-based exchanges. Table 2 displays the range of stakeholder views about the impact of these factors. Appendix V: HHS Paid Advertising Expenditures for 2017 and 2018 Open Enrollment Periods Appendix VI: Comments from the Department of Health and Human Services Appendix VII: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Gerardine Brennan, Assistant Director; Patricia Roy, Analyst-in-Charge; Priyanka Sethi Bansal; Giao N. Nguyen; and Fatima Sharif made key contributions to this report. Also contributing were Muriel Brown, Laurie Pachter, and Emily Wilson.
Why GAO Did This Study Since 2014, millions of consumers have purchased health insurance from the exchanges established by the Patient Protection and Affordable Care Act. Consumers can enroll in coverage during an annual open enrollment period. HHS and others conduct outreach during this period to encourage enrollment and ensure the exchanges' long-term stability. HHS announced changes to its 2018 outreach, prompting concerns that fewer could enroll, potentially harming the exchanges' stability. GAO was asked to examine outreach and enrollment for the exchanges using healthcare.gov. This report addresses (1) 2018 open enrollment outcomes and any factors that may have affected these outcomes, (2) HHS's outreach efforts for 2018, and (3) HHS's 2018 enrollment goals. GAO reviewed HHS documents and data on 2018 open enrollment results and outreach. GAO also interviewed officials from HHS and 23 stakeholders representing a range of perspectives, including those from 4 navigator organizations, 3 issuers, and 6 insurance departments, to obtain their non-generalizable views on factors that likely affected 2018 enrollment. What GAO Found About 8.7 million consumers in 39 states enrolled in individual market health insurance plans offered on the exchanges through healthcare.gov during the open enrollment period for 2018 coverage. This was 5 percent fewer than the 9.2 million who enrolled for 2017 and continued a decline in enrollment from a peak of 9.6 million in 2016. Among the 23 stakeholders we interviewed representing a range of perspectives, most reported that plan affordability played a major role in exchange enrollment—both attracting and detracting from enrollment. In 2018, total premiums increased more than expected, and, as a result, plans may have been less affordable for consumers, which likely detracted from enrollment. However, most consumers receive tax credits to reduce their premiums, and stakeholders reported that plans were often more affordable for these consumers because higher premiums resulted in larger tax credits, which likely aided exchange enrollment. Stakeholders had mixed opinions on the effects that other factors, such as reductions in federal advertising and the shortened open enrollment period, might have had on enrollment. The Department of Health and Human Services (HHS), which manages healthcare.gov enrollment, reduced consumer outreach for the 2018 open enrollment period: HHS spent 90 percent less on its advertising for 2018 ($10 million) compared to 2017 ($100 million). Officials told us that the agency's approach for 2018 was to focus on low-cost, high-performing forms of advertising. HHS reduced funding by 42 percent for navigator organizations—which provide in-person enrollment assistance for consumers—spending $37 million in 2018 compared to $63 million in 2017 due to a shift in administration priorities. HHS allocated the funding using data that it had acknowledged in December 2016 were not reliable. The lack of quality data may affect HHS's ability to effectively manage the navigator program. Unlike in prior years, HHS did not set any numeric targets related to 2018 total healthcare.gov enrollment; officials told us that they instead focused on enhancing the consumer experience for the open enrollment period. Setting numeric targets would allow HHS to monitor and evaluate its overall performance, a key aspect of federal internal controls.
Further, while HHS reported meeting its goal of enhancing the consumer experience, such as by improving healthcare.gov availability, it did not measure aspects of the consumer experience it had identified as key in 2017, such as successful outreach events. Absent a more complete assessment, HHS may not be able to fully gauge its progress toward that goal and may miss opportunities to improve other aspects of the consumer experience. What GAO Recommends GAO is making three recommendations to HHS, including that it ensure the data it uses for determining navigator organization awards are accurate, set numeric enrollment targets, and assess other aspects of the consumer experience. HHS agreed with two recommendations, but disagreed with the need to set numeric targets. GAO maintains that such action is important.
Background Credit Allocation and Cost Oversight Each state receives an annual LIHTC allocation. Allocating agencies then evaluate, against their QAPs, developers’ proposals to use tax credits to help develop new housing or rehabilitate existing housing. The QAPs identify agencies’ priority housing needs and contain selection criteria for awarding credits. In addition to meeting criteria outlined in a QAP, projects awarded tax credits must remain affordable to qualifying households for at least 30 years. The amount of LIHTCs allocating agencies award to a project is primarily based on the project’s eligible basis. The agencies should allocate no more credits than they deem necessary to ensure the project’s financial feasibility through the 10-year credit period. To determine financial feasibility, Section 42 requires allocating agencies to consider the reasonableness of developmental and operating costs, any proceeds or receipts expected to be generated through the tax benefit, and the percentage of credit amounts used for project costs other than the cost of intermediaries such as syndicators (discussed later in this section). Section 42 also requires allocating agencies to evaluate available private financing and other federal, state, and local funding a developer plans to use and adjust the award accordingly. Allocating agencies must review costs to determine the credit amount at three points in time: application (when the proposal is submitted), allocation (when the agency commits to providing credits to a specific project), and placed in service (when the project is ready for occupancy under state and local laws). When a project is placed in service, the developer must submit a final cost certification to the allocating agency. This certification details a project’s total costs and eligible basis. In general, the cost certification must be accompanied by an unqualified audit report from a certified public accountant, conducted in accordance with generally accepted auditing standards. An agency’s QAP (or related documents) may outline policies and procedures for reviewing costs. Investors and Project Financing Once a project is awarded tax credits, developers often attempt to obtain funding for the project by attracting investors willing to contribute equity financing. Developers typically sell an ownership interest in their LIHTC projects in exchange for equity from investors (a process commonly referred to as selling tax credits). The equity contributions (or investments) reduce debt burden on LIHTC projects, making it possible for project owners to offer lower, more affordable rents. Generally, investors buy an ownership interest in a LIHTC partnership (commonly referred to as buying tax credits) to lower their tax liability. Investors in LIHTC projects may invest directly or through intermediaries known as syndicators. Direct investors are typically larger institutional investors, such as banks, with the internal capacity to fund the investment and to manage the acquisition and underwriting of the underlying development project. Under the direct investment model, an investor owns a “limited” partner interest in the partnership owning the underlying property, with the developer typically assuming the “general” partner interest (see fig. 1). Alternatively, investors may invest in a fund organized and managed by a syndicator.
The syndicator-managed funds are limited partnerships in which investors own the limited partner interest in the fund (upper-tier partnership), with the fund in turn owning the limited partner interest in various property partnerships (lower-tier partnership). The money investors pay for a partnership interest in the fund is paid to associated LIHTC projects as equity financing. Syndicators manage two types of funds: proprietary (or single-investor) funds and multi-investor funds (see fig. 2). In both cases, the syndicator originates potential investments, performs underwriting, and presents the potential investments to investors. Syndicators receive a fee from investors—typically a percentage of the gross equity raised—for their services in establishing, originating, underwriting, and closing on projects for investment funds. This fee is often referred to as an “acquisition fee” or an “upper-tier syndication fee.” The syndicator also may charge a fee to each project partnership in a fund for project-specific legal and accounting costs. This fee is often referred to as a “lower-tier syndication fee.” LIHTC projects typically do not produce income through rents for investors. Rather, investors use the credits to offset their income tax liabilities over the 10-year credit period. As a result, for a LIHTC investment to be financially beneficial to an investor, the present value of 10 years of LIHTCs and any related benefits, such as taxable losses and depreciation, generally must exceed the amount the investor contributes in equity. This consideration, in part, drives the price investors are willing to pay for tax credits. Under normal economic conditions, equity pricing per tax credit has ranged from the $0.80s to mid-$0.90s per $1.00 of tax credit. Projects often require financing in addition to investors’ equity contributions to cover development costs. This gap may be filled by federal, state, local, and private sources—for example, certain HUD grants and loans, state tax credits modeled after the federal program, and mortgage loans without government guarantees. A developer also may defer its developer fee to cover all or a portion of a funding gap. Program Oversight IRS and allocating agencies jointly administer the LIHTC program, with other entities providing additional types of oversight, as follows. IRS administration of the LIHTC program includes developing and publishing regulations and guidance, enforcing taxpayer compliance, and overseeing allocating agencies’ monitoring of taxpayer compliance. The IRS Office of Chief Counsel, with assistance from Treasury’s Office of Tax Policy, develops and publishes regulations and guidance based on requirements in Section 42. In general, IRS collects and reviews information necessary for tax administration, including data on LIHTCs awarded and other information necessary to check the amount claimed on tax returns. According to IRS officials, IRS also regularly communicates with allocating agencies and stakeholders about LIHTC compliance issues and best practices at industry meetings and conferences. IRS relies on allocating agencies to administer and oversee the LIHTC program in states. In addition to awarding credits to qualified projects, allocating agencies are responsible for monitoring LIHTC properties for compliance with program requirements (for example, rent ceilings, tenant income, and habitability). 
Noncompliance with LIHTC requirements may result in IRS denying claims for the credit in the current year or recapturing (taking back) credits claimed in prior years. Investors and syndicators also monitor projects by performing due diligence on project viability and eligibility, in part to ensure they receive the expected tax credits. Although not an administering agency, HUD plays a role in collecting data on the program. Specifically, the agency is required to collect information on LIHTC tenant characteristics, as mandated by the Housing and Economic Recovery Act of 2008. Since 1996, HUD has voluntarily collected LIHTC project-level data because of the importance of the credits as a source of funding for low-income housing. HUD also has a role in designating difficult development areas and qualified census tracts. In addition, NCSHA has identified recommended practices for allocating agencies administering the LIHTC program, including oversight of QAPs and cost verification. LIHTC Project Costs Varied Widely, and Scale, Location, and Tenant Characteristics Explained Some Differences Median Cost of LIHTC Projects Was About $200,000 Per Unit, and the Range and Composition of Costs Varied by Construction Type The median per-unit cost of the LIHTC projects completed in our 12 selected allocating agency jurisdictions in 2011–2015 was $204,000. The median per-unit cost of new construction projects was about $50,000 higher than for rehabilitation projects ($218,000 compared to about $169,000). For new construction projects, the median per-unit cost was about $38,000 higher in urban areas than in nonurban areas (about $230,000 compared to $192,000). For rehabilitation projects, the median per-unit cost was about $72,000 higher in urban areas than in nonurban areas (about $196,000 compared to $124,000). The development costs we report may be somewhat understated, because the documentation we obtained from allocating agencies did not consistently include the value of all costs—for example, donated land—which we discuss later in this report. As shown in figure 3, the median per-unit LIHTC equity investment was about $147,000 for new construction projects (about 67 percent of the total development cost) and $103,000 for rehabilitation projects (about 61 percent of the total development cost). Other funding sources, such as private loans or state and local programs, covered the difference between project costs and equity investments. We estimated equity investments for the selected projects based on their LIHTC allocations and the reported prices investors paid for the credits. The median credit price increased from about $0.80 in 2011 to about $0.93 in 2015. Although rehabilitation projects generally had lower per-unit costs than new construction, both types of projects had similar proportions of hard and soft costs (see fig. 4). Hard costs (which include land, existing structures, and construction) were roughly 70 percent of new construction and rehabilitation project costs. Costs for acquisition of existing structures were proportionally higher and construction costs proportionally lower for rehabilitation projects than for new construction. Land costs were close in proportion. Soft costs (which include contractor fees, architect and engineer fees, developer fees, and other soft costs such as construction loan financing) were proportionally similar for new construction and rehabilitation projects—roughly 30 percent.
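To illustrate the arithmetic behind these equity estimates, the sketch below works through a hypothetical project. The annual allocation, unit count, and development cost are illustrative assumptions (the credit price matches the 2015 median noted above); our actual estimates were based on each project's LIHTC allocation and the reported prices investors paid.

    # Simplified sketch of the equity estimate: 10 years of credits times
    # the price paid per credit. All project inputs are hypothetical.

    annual_credit = 1_000_000    # annual LIHTC allocation, claimed for 10 years
    credit_price = 0.93          # price per $1.00 of credit (2015 median)
    units = 70
    total_dev_cost = 14_000_000

    total_credits = annual_credit * 10
    equity = total_credits * credit_price
    print(f"Estimated equity: ${equity:,.0f}")                          # $9,300,000
    print(f"Equity per unit: ${equity / units:,.0f}")                   # ~$132,857
    print(f"Share of development cost: {equity / total_dev_cost:.0%}")  # 66%

    # Why credit prices stay below $1.00: the credits arrive over 10 years,
    # so their present value at the investor's discount rate is below face
    # value. (The discount rate here is an assumption.)
    rate = 0.06
    pv = sum(annual_credit / (1 + rate) ** t for t in range(1, 11))
    print(f"PV of credits at {rate:.0%}: ${pv:,.0f} "
          f"(${pv / total_credits:.2f} per $1.00 of credit)")

That present value, about $0.74 per credit dollar at a 6 percent discount rate, is below the $0.80 to $0.93 prices noted above; the difference reflects the related benefits, such as taxable losses and depreciation, that investors also receive.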
Project Cost Trends Differed by Construction Type and Are Difficult to Compare to Market-Rate Projects In nominal terms, the median per-unit cost of new construction projects increased by about 13 percent during 2011–2015, and the median per-unit cost of rehabilitation projects decreased by about 21 percent. After accounting for inflation, the median per-unit cost for new construction projects increased by about 7 percent (from about $208,000 to $222,000 in 2015 dollars), while the median per-unit cost for rehabilitation projects decreased by about 26 percent (from about $207,000 to $153,000 in 2015 dollars). However, this analysis does not account for changes in the composition of projects that were built (such as size or location). In addition, the overall trends were substantially affected by certain allocating agencies. For example, California accounted for about 24 percent of the new construction projects in our sample. During 2011–2015, the median per-unit cost of California’s new construction projects increased by about 11 percent (about 18 percent in nominal terms), while the median per-unit cost of all other new construction projects in our sample decreased by about 4 percent (in nominal terms, increased by about 2 percent). Additionally, New York City accounted for about 19 percent of the rehabilitation projects in our sample, and the median per-unit cost of its projects declined by about 33 percent (about 32 percent in nominal terms) in 2011–2012. During this same period, the median per-unit cost of all other rehabilitation projects increased by about 13 percent (about 15 percent in nominal terms) but did not show a clear trend in 2011–2015. To provide some context for the project costs and trends discussed above, we compared the annual rates of change for median new construction costs—generally site work, construction materials and labor, and contractor fees—to the annual rates of change in a Bureau of Labor Statistics index for construction costs that tracks price changes for various types of new construction. The median per-unit construction cost of the LIHTC projects (unadjusted for inflation) and the index both increased over the analysis period—by 11 percent and 10 percent, respectively. However, while the index consistently increased annually by an average of about 2 percent, the magnitude and direction of changes for the LIHTC projects varied, increasing by as much as about 8 percent in 2013–2014 and decreasing by about 5 percent in 2014–2015. Figure 6 shows the annual median per-unit construction costs for new construction LIHTC projects and a projected trend if they had increased at the rate of the Bureau of Labor Statistics index beginning in 2011. These results suggest that factors besides the price of construction inputs (such as material, labor, and contractor fees) drove changes in the median cost of LIHTC projects completed during 2011–2015. Project locations and characteristics varied each year, and a number of these factors were associated with per-unit costs, as discussed later. To provide context for our cost analysis, we also examined the feasibility of comparing LIHTC development costs to development costs for market-rate projects. However, we were unable to obtain data on market-rate developments from industry groups we contacted that represented developers and lenders, or from researchers who had conducted similar studies.
Additionally, allocating agencies did not consistently maintain key project data—such as gross square footage, number of stories, or construction wages—needed to benchmark LIHTC project costs using a construction cost estimation tool. We discuss these and other data challenges in greater detail later in this report. Nonetheless, several factors provide possible explanations for why construction costs, developer fees, and other soft costs may differ between LIHTC and market-rate projects: Durability. LIHTC project developers may have an incentive to use more durable (and potentially more expensive) construction components than they might for market-rate developments. They may seek to limit replacement costs before the end of the 15-year compliance period—after which they may seek additional LIHTCs for rehabilitation or convert units to market-rate. As revenue from tenant rents is generally lower for LIHTC projects than for market-rate projects, and because investors prefer not to refinance during the 15-year compliance period (which would lower their returns), LIHTC project owners are more limited in their ability to recapitalize aging projects. On the other hand, market forces may encourage market-rate developers to provide higher-grade finishes and amenities than LIHTC developers in some markets. Agency and local requirements. Allocating agencies can use QAP minimum standards and scoring incentives to influence the types of projects developers propose and build. Although these preferences can help achieve a variety of policy priorities, some can increase costs. For example, QAPs may provide developers with incentives to pursue historic preservation projects or require them to add on-site commercial space or amenities such as community rooms. Green building and energy-efficiency standards are also common QAP incentives that can increase development costs, although they may offset some future operating costs through lower utility expenses. Some QAPs also may incentivize urban infill projects on sites that require extensive demolition or environmental remediation, which add to costs. Profit motive. LIHTC projects may be less attractive financially for developers than market-rate projects because they yield lower profits from rental income. Accordingly, allocating agencies allow a developer fee, which is generally paid with tax credit equity. For the projects in our sample, developer fees represented about 11 percent of development costs at the median. In comparison, market-rate developers are generally compensated through rental income or from the sale of their developments. Other soft costs. LIHTC projects may have higher soft costs (other than developer fees) compared to market-rate and other types of affordable developments for a number of reasons, including the following: Financing projects through LIHTC equity is a complex process that can result in higher legal, accounting, and syndication fees and can also require developers to hire outside consultants and develop sophisticated internal capacity. LIHTC developers also generally rely on multiple public and private funding sources in addition to tax credit equity to fully finance projects. For example, projects in California used about six funding sources in addition to LIHTC equity, on average. These additional sources can increase legal, accounting, and other fees due to the costs associated with seeking additional sources, writing applications, and complying with further appraisal, audit, and regulatory requirements.
Securing additional funding sources also can delay the development process, which may increase land holding and interest expenses. LIHTC Project Costs Varied across Selected Allocating Agencies As shown in figure 7, the median per-unit cost of new construction projects across the 12 selected allocating agencies ranged from a low of about $126,000 in Texas to a high of $326,000 in California. The median per-unit cost was less than $200,000 for 4 of the 12 allocating agencies (Arizona, Georgia, Ohio, and Texas); from $200,000 to $300,000 for 6 of the 12 allocating agencies (Florida, Illinois, New York, New York City, Pennsylvania, and Washington); and greater than $300,000 for 2 of the 12 agencies (Chicago and California). Median per-unit costs for rehabilitation projects were lower and varied less than those for new construction projects, ranging from a low of about $107,000 in Illinois to a high of about $258,000 in both Chicago and New York. In all selected allocating agencies, the median per-unit cost for rehabilitation projects was lower than for new construction projects. For example, the median in California was about $184,000, compared to about $326,000 for new construction. For additional details on the cost of rehabilitation projects, see appendix III. As also shown in figure 7, within individual allocating agencies, the cost difference between the least and most expensive project was as little as $104,000 per unit (Georgia) and as much as $606,000 per unit (California). Project costs tended to be clustered around the median for each allocating agency, but were still widely distributed between the 25th and 75th percentiles for some allocating agencies. For example, the difference between the 25th and 75th percentiles was more than $75,000 in half of the locations we reviewed (California, Chicago, Illinois, New York, New York City, and Pennsylvania). Although project costs were among the highest for the Chicago and New York City allocating agencies, they were within the range of costs for five other cities that had comparable population and density and were in the jurisdictions of other allocating agencies within our sample (see fig. 8). Hard costs as a proportion of total development costs varied among the selected allocating agencies. Agencies’ hard costs ranged from about 66–76 percent for new construction projects completed in 2011–2015, with soft costs accounting for the remainder (see fig. 9). The proportions of hard and soft costs were generally similar across higher- and lower-cost locations. For example, California had the highest median per-unit cost among selected allocating agencies, but had hard and soft costs (about 67 and 33 percent) proportionally similar to those in Texas (about 68 and 32 percent) and Georgia (about 69 and 31 percent), where median per-unit costs were among the lowest. In relation to hard costs, median per-unit construction costs were highest in Chicago, where construction costs constituted about 72 percent of total development costs (but were about 63 percent elsewhere, on average). In comparison, construction costs in California were just 56 percent of total development costs due to higher land costs (about 12 percent of total development costs, but about 5 percent elsewhere, on average). For soft costs, developer fees and other soft costs (such as construction loan interest and permit fees) varied more widely across the allocating agencies than architect and engineer fees and contractor fees.
Developer fees ranged from about 6 percent of development costs in Chicago to about 13 percent of development costs in Florida. Other soft costs similarly ranged from about 7 percent of development costs in Pennsylvania to about 14 percent of development costs in California. In comparison, architect and engineer fees ranged from about 3 percent to 5 percent of development costs, and contractor fees ranged from about 5 percent to 9 percent of development costs. Scale, Location, and Other Characteristics of LIHTC Projects Explained Some Cost Differences By design, the LIHTC program gives allocating agencies flexibility to address local housing needs and agency priorities through their award processes. As a result, the characteristics of each agency’s LIHTC projects generally can be expected to reflect the real estate conditions, built environment, and populations of the areas they serve. For example, in locations with less density and inexpensive land, low-rise multibuilding developments may be more cost-effective, while in locations with higher density and expensive land, taller single-building developments may be more cost-effective. Therefore, it is important to consider the cost reasonableness of LIHTC developments within the context of local conditions. As previously noted, we developed a regression model to examine the relationship between the cost of developing LIHTC projects and various building, location, and other variables (a simplified sketch of this type of model appears later in this discussion). Our model results indicate that a number of key characteristics were associated with significant increases or decreases in the per-unit costs of LIHTC projects that received tax credit awards from our selected allocating agencies. Differences in the prevalence of these characteristics among the allocating agencies help explain the cost variation among and within them. While our results indicate that these characteristics may have directly or indirectly affected per-unit cost, their specific effects varied by allocating agency, suggesting that our estimates are sensitive to the particular conditions of the locations we sampled. First, construction type (new construction or rehabilitation) and scale (number of units and unit size, measured by number of bedrooms) were associated with cost, controlling for other characteristics. Construction type. We previously noted that the median per-unit cost for new construction was about $50,000 higher than the per-unit cost for rehabilitation projects, but after controlling for other characteristics, we estimated this difference to be $39,000. New construction projects were more costly than rehabilitation projects because they had higher construction costs (primarily site work, materials, and labor). For perspective, $39,000 represents about 19 percent of the median per-unit cost ($204,000) of projects in our sample. Number of units. In general, we found that per-unit costs decreased as the number of units in a project increased, consistent with economies of scale in construction. Specifically, we estimated that the per-unit cost of projects with more than 100 units was about $85,000 less than projects with fewer than 37 units (see fig. 10). In addition, we estimated that the per-unit cost of projects with 37–50 or 51–100 units was about $31,000 or $56,000 lower, respectively, than projects with fewer than 37 units. However, due to data limitations, our analysis does not account for building type—for example, high-rise or low-rise structures—that may have affected per-unit cost.
To account for some variation in building type, we compared projects with one or more larger buildings (60 or more units) to projects with more typical building designs. We found that the per-unit cost of projects with larger buildings—which were also taller on average—was about $15,000 more (about 7 percent of the median per-unit cost). This difference may be attributable to specific design requirements of larger and taller structures, such as construction materials and sprinkler systems. Unit size (number of bedrooms). As would be expected when comparing costs on a per-unit basis, we estimated that projects with larger units had higher per-unit costs. We estimated that the per-unit cost decreased by about $2,000 (or about 1 percent of the median per-unit cost) as the number of units with fewer than two bedrooms increased by 10 percent. Conversely, the per-unit cost increased by about $3,000 as the number of units with more than two bedrooms increased by 10 percent. Second, we also found that the types of organizations that developed LIHTC projects and the tenants they targeted were associated with per-unit cost, after controlling for other characteristics. Tenant type. We estimated that the per-unit cost of projects targeted to seniors was about $7,000 lower than nonsenior projects (or about 3 percent of the median per-unit cost). Compared to nonsenior projects, units in senior projects generally had less residential square footage (for which we did not control), which may help explain their lower per-unit costs. Target income level. We also estimated that the per-unit cost of projects targeted to predominantly low-income tenants was about $11,000 more than for mixed-income projects (or about 5 percent of the median per-unit cost). Mixed-income projects might be expected to have higher costs as they generate more rent revenue to support higher development costs. But, because LIHTC allocations are calculated based on the ratio of low-income units to total units, predominantly low-income projects receive proportionally more LIHTC equity, which may allow them to support higher development costs. For example, we estimated that projects targeted toward predominantly low-income tenants generated LIHTC equity equal to about 67 percent of development cost, whereas mixed-income projects generated LIHTC equity equal to about 50 percent of development cost. Nonprofit participation. Section 42 requires a portion of each state’s tax credit allocation to be set aside for projects involving a qualified nonprofit organization. We estimated that the per-unit cost of these projects was about $15,000 more than projects not in the set-aside (or about 7 percent of the median per-unit cost). Other studies of the LIHTC program have suggested potential explanations for this result. For example, nonprofit organizations may focus more on populations that are more costly to serve, such as special-needs tenants who may require additional or enhanced facilities. Additionally, nonprofit developers may have higher costs because they are often smaller, produce fewer projects, and may need to spend more time and resources on activities such as fundraising and market research, compared to their for-profit counterparts.
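The per-unit estimates in this section come from a multivariate regression. As a rough, simplified sketch of how such a model can be specified (using a hypothetical project-level file and column names, not our actual data set or specification), consider the following:

    # Simplified sketch of a hedonic cost regression; the file name, column
    # names, and specification are hypothetical and far sparser than the
    # model discussed in this report.
    import pandas as pd
    import statsmodels.formula.api as smf

    projects = pd.read_csv("lihtc_projects.csv")  # hypothetical data set

    # Per-unit cost regressed on construction type, scale, tenant, and
    # location characteristics; C() treats a variable as categorical.
    model = smf.ols(
        "cost_per_unit ~ new_construction + C(unit_count_group) "
        "+ pct_units_under_2br + senior_project + nonprofit_set_aside "
        "+ C(location_type)",
        data=projects,
    ).fit()

    print(model.summary())

Each coefficient estimates the association between a characteristic and per-unit cost, holding the other included characteristics constant, which is the sense in which the dollar figures in this section are "controlling for other characteristics." Third, controlling for other characteristics, we found that a number of geographic and economic variables were associated with cost differences. Location.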
We estimated that urban locations were associated with a per-unit cost about $13,000 higher than for suburban locations (or about 6 percent of the median per-unit cost), and that per-unit costs in rural areas were not statistically different from suburban areas. Consistent with this estimate, the data in our sample show that per-unit land and construction costs were greater in urban areas than in nonurban areas. In addition, urban projects were more likely to include parking structures, which we found were associated with a per-unit cost increase of about $56,000 in California and Arizona (or about 27 percent of the median per-unit cost), where parking structure data were available. About 98 percent of the projects with parking structures were in urban areas. Urban projects were also located in closer proximity to transit, which we found increased per-unit construction costs. In an alternative specification of our model limited to projects near fixed-guideway transit stations, we estimated that the per-unit construction costs of projects that were 0.5 miles or less from a transit station—known as transit-oriented developments—were about $17,000 more than projects that were between 0.5 miles and 1.0 miles from a transit station. Local housing market and economy. As discussed previously, difficult development areas are those with high construction, land, and utility costs relative to area median gross income; qualified census tracts are areas with higher rates of low-income households or poverty rates. We did not find that projects in these areas were associated with cost differences compared to projects outside these areas. However, we found cost differences among projects in difficult development areas and qualified census tracts when we estimated alternative specifications of our model that excluded some geographic, economic, and local housing market variables that may be associated with the areas and tracts. For example, using a model specification that excluded local property values, we estimated that difficult development areas were associated with about a $9,000 increase in per-unit costs. In a separate estimation that excluded poverty rates and some other economic and geographic variables, we estimated that projects in qualified census tracts were associated with a per-unit cost increase of about $18,000 (or about 9 percent of the median per-unit cost). In both cases, the project characteristics of interest (difficult development area or qualified census tract) are likely associated with the excluded variables mentioned, as difficult development areas are characterized by high land costs and qualified census tracts are characterized by high poverty rates, among other factors. In the absence of the excluded geographic or local housing market variables, the estimated influence of these project characteristics is more pronounced. Finally, we found that federal funding sources in addition to LIHTCs were associated with cost differences, after controlling for other characteristics. American Recovery and Reinvestment Act funding. We estimated that projects that received funding through either of two LIHTC programs (Tax Credit Assistance Program or Section 1602 Program) under the American Recovery and Reinvestment Act of 2009 (ARRA) were associated with a decrease of about $13,000 in per-unit costs (or about 6 percent of the median per-unit cost).
Projects received ARRA funds during a period of economic recovery, and the relative scarcity of private funds may have motivated developers to pursue less costly projects. Because about 91 percent of projects that received ARRA funds were completed in 2011–2012, we restricted our ARRA estimate to projects completed in that period. We estimated that soft costs were about $4,000 per unit lower for ARRA projects than for non-ARRA projects. Soft costs, which we previously mentioned were about one-third of total development costs, may have been lower for ARRA projects because proportionately fewer of these projects used tax credit equity to fund development costs. For example, about 30 percent of these projects received ARRA funds entirely in lieu of tax credits. As a result, ARRA projects may have had lower or no tax credit partnership and syndication costs. However, we did not estimate a significant difference in construction costs between ARRA and non-ARRA projects. Rural Development funding. Projects that received at least one Rural Development loan or grant from the Department of Agriculture were associated with about a $32,000 decrease in per-unit cost (or about 16 percent of the median per-unit cost). However, projects that received these loans or grants may have had unique characteristics that affected cost. According to an allocating agency official from California—where about 19 percent of the projects we reviewed used at least one Rural Development loan or grant—projects that received these funds may have had lower total development costs because high-cost projects were not financially feasible in some rural areas due to lower rents and less local public funding. In addition, projects to house seasonal farm workers that receive funding from Rural Development’s Section 514/516 Farm Labor Housing programs may lack some amenities—such as in-unit kitchens and bathrooms—that increase costs and are more common in other LIHTC projects. Furthermore, private loans guaranteed through Rural Development’s Section 538 Guaranteed Rural Rental Housing Program are subject to per-unit limits, which may have hindered the feasibility of higher-cost projects. Other federal funding. We also estimated that projects that received HOPE VI funds were associated with about an $18,000 increase in per-unit costs (or about 9 percent of the median per-unit cost). However, the cost increase that we estimated may not have fully captured all additional costs associated with these projects. Several of the 23 HOPE VI projects included in our sample were phases of larger HOPE VI Revitalization Grant projects and may have included only the project costs associated with a smaller portion of a multibuilding development. In addition, some predevelopment expenses associated with the overall grant project, such as the demolition of existing structures and tenant relocation, may not have been included in the cost certifications we reviewed. In contrast to the HOPE VI projects we reviewed, we did not find that projects that received Community Development Block Grant (CDBG) or HOME Investment Partnerships Program (HOME) funds had statistically different per-unit total development costs. However, like HOPE VI projects, CDBG and HOME projects were associated with increases in per-unit construction costs (about $15,000 or $6,000, respectively).
The presence of HOME funds also was associated with an increase in per-unit soft costs (about $2,000), while CDBG or HOPE VI funds were not strongly associated with differences in per-unit soft costs. While these sources were associated with cost differences, controlling for other characteristics, the association may not be entirely causal. The use of CDBG, HOME, and HOPE VI funds may have directly increased construction costs, as fund usage can trigger federal prevailing wage requirements. On the other hand, CDBG and HOME funding (for example) may have been used in addition to LIHTC equity to fill funding gaps for projects with particularly high costs. Finally, to examine the relationship between our model characteristics and the per-unit cost of low- and high-cost projects, we compared the characteristics of new construction projects below the 25th percentile for per-unit cost against those above the 75th percentile. As shown in table 1, projects below the 25th percentile generally had a higher proportion of characteristics that were associated with decreases in per-unit cost. These projects were larger, had smaller units, were more often targeted toward seniors, and were more often located in rural areas. In comparison, projects above the 75th percentile generally had a higher proportion of characteristics associated with increases in per-unit cost (or less of a decrease). These projects were smaller, had larger units, were more often located in urban areas, and were built in more expensive real estate markets, as the following examples illustrate. About 70 percent of the projects below the 25th percentile had either 51–100 units or more than 100 units—which we found were associated with lower per-unit cost—compared to just 46 percent of the projects above the 75th percentile. About 40 percent of the projects below the 25th percentile were senior projects—which we also found were associated with lower per-unit costs—compared to 18 percent for projects above the 75th percentile. About 88 percent of the projects above the 75th percentile were in urban areas—which we found were associated with higher per-unit costs—compared to 71 percent of the projects below the 25th percentile. Allocating Agencies Took Steps to Manage and Verify Development Costs, but LIHTC Policies Do Not Require Detailed Cost Information Allocating agencies used approaches that include cost and fee limits and cost-based scoring criteria to manage project-development costs. A few agencies adopted additional measures such as detailed contractor certifications at project completion to help guard against a risk of fraud involving misrepresentation of contractor costs, but LIHTC policies do not require these enhancements. The 57 Allocating Agencies Managed Development Costs through Approaches That Included Cost and Credit Limits, Fee Limits, and Scoring Criteria As shown in table 2, the eligibility requirements and scoring systems that the 57 allocating agencies used to evaluate credit applications generally included approaches that seek to limit development costs or incentivize lower costs. For information on the approaches each of the agencies used, and in what combination, see appendix VI. The types and number of cost-management approaches employed by each agency varied, as illustrated in table 3. More than one-third of the agencies used all four types of cost-management approaches we identified (one or more cost limits, credit allocation limits, fee limits, and cost-based scoring criteria).
In contrast, a few agencies used just one type of approach. The number of approaches used by an agency is not necessarily indicative of the effectiveness of its cost management. Additionally, the way that agencies implemented each type of approach varied. The cost-management approaches agencies identified in their QAPs and related documents were as follows. Cost limits. More than two-thirds of the allocating agencies (39 of 57) set limits on the total development cost for each project or set limits on the total eligible basis (or both). Total development cost is the overall cost to develop a project, whereas eligible basis typically includes costs associated with acquisition, construction and rehabilitation, and most soft costs, but excludes costs associated with land, permanent financing, and tax credit syndication. For information on cost limits for each of the 57 agencies, see appendix VI, table 32. Thirty-three agencies set limits on the total development cost for each project. For example, Illinois limited total costs by bedroom type, number of units, and location, based on the agency’s analysis of historical cost data. Ten agencies set cost limits on a project’s eligible basis, and their approaches to these limits varied. For example, two agencies adopted universal eligible basis limits of $250,000 per unit (Pennsylvania) and $300,000 per unit (New York City), whereas most others had multiple limits based on project characteristics such as type (new construction or rehabilitation), number of bedrooms, and location. Six agencies, including Georgia, applied cost limits from a HUD program that insures mortgages for rental housing for moderate-income families. According to Georgia officials, adopting the HUD limits was more cost-effective than developing cost limits based on a market analysis. Credit allocation limits. About two-thirds (34) of the allocating agencies had limits on the amount of LIHTCs available, generally per project or per developer, and the limits varied by type and amount. For information on credit allocation limits for each of the 57 agencies, see appendix VI, table 33. Twenty-nine agencies had allocation limits per project, which included dollar limits (from $500,000 to $2.5 million) and percentage limits (from 10 percent to 60 percent of an agency’s total available credits per project), and two of these agencies also had a per-unit limit. For example, Illinois limited credits per project to the lesser of $1.5 million or 28,500 credits per unit. California limited credits per project to $2.5 million, and Washington limited credits to 10 percent of the agency’s total available credits. Fourteen agencies had credit limits per developer or for the number of projects a developer can sponsor in a given year. One of these agencies also had a per-unit limit. The developer credit limits included dollar limits (from about $1.2 million to $3 million per developer) and percentage limits (from 10 percent to 25 percent of the agency’s total available credits). For example, Pennsylvania limited credits to $1.2 million per developer, and Washington limited developers to 15 percent of the agency’s total LIHTCs and two projects per application round. Another agency limited the number of projects (two) a developer can sponsor in a given year. Fee limits. Fifty-one agencies limited developer fees and 47 also limited contractor fees. The agencies’ approaches to developer and contractor fee limits varied.
As for other limits, 14 agencies limited fees for other project team members such as architects. For information on fee limits for each of the 57 agencies, see appendix VI, table 34. Twenty-seven agencies had a flat limit on developer fees based on a percentage of the total development cost (typically 15 percent, although percentages ranged from 8 percent to 20 percent), while two others had dollar caps ($13,000 and $18,000 per unit). Twenty-one agencies set tiered limits for developer fees based on the number of units in or cost of the project. For example, Arizona and Texas based their two- and three-tiered limits on the number of units in a project. Chicago and Illinois had tiered percentage limits based on a project’s development costs. Twenty-five agencies had separate developer fee limits for acquisition costs, ranging from 4 percent to 15 percent, or tiered limits based on development costs. Fourteen agencies set dollar caps on the total fees developers could receive per project, ranging from $1 million to $3.75 million. Twenty-seven agencies also limited fees earned by related-party developers and contractors. For example, Pennsylvania set a related-party developer fee limit (12 percent) lower than its developer fee limit (15 percent). Illinois required related-party developers to reduce their fees by their related general contractor’s profit. Cost-based scoring criteria. A large majority (51) of the allocating agencies used a competitive scoring process that incorporated one or more cost-based criteria to award LIHTCs. For information on cost-based scoring criteria for each of the 57 agencies, see appendix VI, table 35. Twenty-four agencies awarded points to projects with costs under an agency’s limits. For example, Washington awarded points to projects for which the developer fee was below the agency’s limit of 15 percent. Eighteen agencies awarded points to projects with comparatively lower costs. For example, New York City awarded points to projects with costs below the median total development cost of all submitted applications. Eleven agencies awarded points to applications for credit efficiency, which many of the agencies measured by the dollar amount of credits requested relative to the number of units proposed. For example, Ohio awarded a sliding scale of points to projects based on the ratio of the credits requested to the proposed number of units, with lower ratios (representing greater credit efficiency) earning more points. Three agencies’ competitive scoring criteria included penalties for developers with poor past cost performance. For example, they awarded negative points to developers that exceeded cost limits or provided incomplete cost information for previous projects. In addition, 35 agencies included a cost-based criterion in their application scoring tiebreakers. For example, Arizona included a credit efficiency criterion as a tiebreaker. Other cost-related approaches (12 selected agencies). Through our interviews and review of documentation, we also identified several other steps that our 12 selected allocating agencies took to manage LIHTC project costs at application and during construction. Officials from two agencies (Georgia and Ohio) told us that their cost-reasonableness reviews included identifying high-cost outliers. For example, Ohio replaced its total development cost limit with a process for identifying and removing from consideration projects with the highest total development costs compared with other competing applications.
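To make these mechanics concrete, the following sketch screens a hypothetical application against three of the rules described above: a flat 15 percent developer fee limit, a per-project credit cap of the kind Illinois used, and a high-cost outlier screen similar in spirit to Ohio's. The thresholds follow the examples cited in the text; the application values and the percentile cutoff are illustrative assumptions, not any agency's actual rules.

    # Illustrative screening of a LIHTC application against common
    # cost-management rules; application values are hypothetical.

    def screen_application(total_dev_cost, developer_fee,
                           credits_requested, units, peer_costs_per_unit):
        findings = []

        # Flat developer fee limit: 15 percent of total development cost.
        if developer_fee > 0.15 * total_dev_cost:
            findings.append("Developer fee exceeds the 15 percent limit")

        # Per-project credit cap: lesser of $1.5 million or 28,500 credits
        # per unit, as in the Illinois example above.
        credit_cap = min(1_500_000, 28_500 * units)
        if credits_requested > credit_cap:
            findings.append(f"Credits requested exceed the ${credit_cap:,} cap")

        # High-cost outlier screen: flag applications above the 90th
        # percentile of competing applications' per-unit costs (the
        # cutoff is an assumption for illustration).
        ranked = sorted(peer_costs_per_unit)
        p90 = ranked[int(0.9 * (len(ranked) - 1))]
        if total_dev_cost / units > p90:
            findings.append("Per-unit cost is an outlier among competing applications")

        return findings or ["No cost findings"]

    print(screen_application(
        total_dev_cost=12_000_000, developer_fee=2_000_000,
        credits_requested=1_600_000, units=48,
        peer_costs_per_unit=[150_000, 180_000, 205_000, 230_000, 310_000]))

This hypothetical application draws all three findings: the fee exceeds $1.8 million, the request exceeds the $1,368,000 cap, and the $250,000 per-unit cost tops the peer distribution.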
Chicago and Florida officials said they required or encouraged a bid process for selecting contractors or subcontractors. Florida officials told us that competitive selection of subcontractors, rather than using related-party subcontractors, provided cost transparency and could lead to lower costs. Similarly, New York City officials told us that nearly all the agency’s LIHTC projects received funds from a city subsidy loan program that can require competitive selection of contractors, and the agency reviewed each contractor bid for cost reasonableness. Illinois required third-party cost reviews of some projects as part of its cost-reasonableness review. Projects with related parties and all rehabilitation projects had to provide a construction cost breakdown completed by an independent third party. Additionally, Georgia’s QAP provided discretion to the agency to require a third-party cost review as needed. According to officials from 11 of the 12 agencies, policies they used to discourage cost increases during construction included restrictions on change orders, such as by requiring agency approval and documenting a project’s cost increases (8 agencies); requiring developers or general contractors to pay for cost increases using contingency funds, profits, or other sources of funding (10 agencies); and penalizing developers for cost increases in future application rounds (5 agencies). Nine of the 12 selected agencies conducted site inspections directly or by a third party to monitor construction progress, ranging from one visit to biweekly site visits. For example, New York officials said they conducted regular and unannounced site visits. Officials from the other 3 agencies said they did not conduct site visits and relied on other public funding partners, private lenders, developers, and syndicators to monitor projects during construction and, in some cases, provide monitoring reports for the agency’s review. Although officials from many of the selected allocating agencies acknowledged the importance of managing LIHTC development costs, for the most part agencies have not determined the specific cost effects of their approaches. A June 2016 report by Enterprise Community Partners recognized the complexity of assessing the cost implications of individual agency actions, while also noting that the wide range of agency approaches represented an opportunity for experimentation, innovation, and sharing of leading practices. The report recommended that as agencies establish goals and make changes to QAPs, they should regularly evaluate cost trends and outcomes. But as discussed later in the report, limitations in the cost-related data allocating agencies collect and the format in which they maintain them have hampered such evaluation. Some Allocating Agencies Have Enhanced Cost-Verification Requirements to Manage a Fraud Risk, but LIHTC Policies Do Not Require It While a few allocating agencies have implemented additional cost-certification controls—such as contractor-level certifications—to help address the risk of fraud involving misrepresentation of contractor costs, there are no LIHTC requirements to do so. Rather, allocating agencies oversee costs at project completion by reviewing final developer cost certifications. LIHTC regulations require developers of projects with more than 10 units to submit a cost certification, which includes total project costs and eligible basis, to the allocating agency and for the certification to be audited by a certified public accountant.
As illustrated in figure 11, developer cost certifications do not break out specific contractor costs; rather, they aggregate contractor costs into several broad categories. While the extent of fraud in the LIHTC program is not known, federal legal actions involving LIHTC projects in Florida highlight the risk of unscrupulous developers, contractors, and subcontractors inflating costs and obtaining excess program resources for personal financial gain. For example, according to the Department of Justice’s U.S. Attorney’s Office for the Southern District of Florida: Several developers and contractors conspired in a contract inflation scheme affecting numerous LIHTC projects. The scheme involved submitting fraudulently inflated cost information to the allocating agency, resulting in $36 million in excess LIHTCs and federal grants. Seven individuals pled guilty and received sentences that included forfeiture of fraudulently obtained funds and, for three individuals, prison time. In another scheme affecting four LIHTC projects, developers working with a related-party contractor and subcontractor submitted fraudulently inflated cost information to the allocating agency. Under a prosecution agreement, the subcontractor has paid $5.2 million in forfeiture and fines. But only a limited number of allocating agencies—5 of the 12 we selected and at least 4 of the remaining 45 agencies—have additional cost-certification controls to help address the risk of fraud involving misrepresentation of contractor costs. These controls are outlined in the agencies’ QAPs. Agencies outside of the 12 we selected for more detailed review could have requirements beyond what appears in their QAPs. However, two national accounting firms with LIHTC practices confirmed that, as of early 2018, a limited number of allocating agencies had implemented controls to address the risk of fraud involving misrepresentation of contractor costs. Of the 12 selected agencies, 4 required general contractor cost certifications, which provide information that can be used to corroborate costs listed in developer cost certifications (see fig. 12). More specifically, Florida and Ohio required general contractor cost certifications for all projects, and Arizona and Georgia required cost certifications only from related-party general contractors. In addition, California required auditors performing developer cost certifications for projects with related parties to audit to the level of the subcontractor. According to one national accounting firm, this may involve examining source documents from subcontractors (such as invoices, fee agreements, contracts, or deeds) to verify consistency with construction line items in the developer cost certification. Among the 45 remaining agencies, Delaware, Kentucky, Michigan, and Missouri had QAPs that required general contractor cost certifications for all projects. None of the 45 agencies’ QAPs cited a requirement for cost certifications for related-party general contractors. Officials from a few of the 12 selected agencies and a LIHTC accounting firm told us that unrelated parties also may present a fraud risk. The LIHTC development community is small in some markets, and unrelated developers and contractors may work together repeatedly. These relationships may pose risks similar to related-party relationships by increasing opportunities to collude in misrepresenting costs.
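As a simplified illustration of the added transparency a general contractor cost certification provides, the sketch below reconciles construction line items reported by a developer against those certified by the general contractor and flags variances for follow-up. The line items and amounts are hypothetical; actual certifications are audited documents with many more line items.

    # Hypothetical reconciliation of developer-reported construction costs
    # against a general contractor's cost certification.

    developer_cert = {
        "site work": 900_000,
        "structures": 6_200_000,
        "general requirements": 430_000,
        "contractor overhead": 140_000,
        "contractor profit": 420_000,
    }

    contractor_cert = {
        "site work": 900_000,
        "structures": 5_750_000,   # lower than the developer-reported amount
        "general requirements": 430_000,
        "contractor overhead": 140_000,
        "contractor profit": 420_000,
    }

    for item, dev_amount in developer_cert.items():
        gc_amount = contractor_cert.get(item, 0)
        variance = dev_amount - gc_amount
        if variance:
            print(f"{item}: developer reported ${dev_amount:,}, contractor "
                  f"certified ${gc_amount:,} (variance ${variance:,})")

Without the contractor's certification, the $450,000 overstatement of structure costs in this hypothetical would be invisible to the allocating agency, because the developer certification aggregates contractor costs into broad categories.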
Requiring information beyond the developer cost certification provides greater cost transparency, which may help to deter or detect misrepresentation of costs. Federal LIHTC regulations do not require developers to provide contractor- or subcontractor-level cost information to LIHTC allocating agencies, or for auditors to verify the consistency of these costs with the developer cost certification. As a result, the regulations do not fully address the risk of fraud involving misrepresentation of contractor costs. Federal internal control standards state that management should consider the potential for fraud when identifying, analyzing, and responding to risks. IRS and Treasury officials told us they have not considered implementing changes to the cost-certification requirement and that neither allocating agencies nor industry groups had suggested to them that the existing regulation needed clarification. They suggested that allocating agencies could enhance the requirement at their discretion. In contrast, NCSHA revised its recommended practices for allocating agencies in 2017, advising that agencies should require additional cost certification due diligence for all housing credit developments. According to NCSHA, this additional due diligence may include audits of general contractors—alone or with an additional review of a sampling of subcontractor invoices—to verify consistency with the developer cost certification. However, NCSHA’s recommended practices are voluntary, and it remains to be seen how many agencies implement these enhanced measures and in what form. Moreover, NCSHA, a national accounting firm, some developers, and several of the selected allocating agencies told us that additional cost-certification requirements can provide more detailed cost information and help deter fraud by providing more cost transparency to allocating agencies and auditors. Two of these allocating agencies estimated that requiring general contractor cost certifications could increase project costs by about $5,000–$15,000. NCSHA and two other selected agencies noted that additional cost certification requirements would not significantly increase project costs. Under the existing federal cost certification requirement—which stops at the developer level—the vulnerability of the LIHTC program to a known fraud risk is heightened, particularly in states in which allocating agencies have not implemented additional cost certification measures. Weaknesses in Data Quality and Federal Oversight Constrain Assessment of LIHTC Costs Data Limitations Hinder Detailed Evaluation of LIHTC Development Costs Data limitations, including inconsistencies among allocating agencies in the collection, definition, and format of key variables, constrain analysis and oversight of LIHTC development costs. While we were able to provide a cost analysis earlier in this report, our analysis was limited to those variables we were able to consistently collect and that were similarly defined across the selected allocating agencies. LIHTC regulations require developers to submit cost certifications to allocating agencies and the agencies to evaluate all sources and uses of funds for each project. However, IRS does not specifically require allocating agencies to collect and report cost-related data that would facilitate programwide assessment of development costs. IRS officials said that doing so would be inconsistent with their authority and role, which is focused on taxpayer compliance rather than program evaluation.
As a result, allocating agencies have flexibility in what cost-related data to collect, how to maintain these data, and how to define variables for purposes of program evaluation. Our tax expenditure evaluation guide suggests federal agencies assess (determine and define) what data are needed to evaluate tax expenditures. Without standardized, accessible data on LIHTC development costs, federal agencies and credit allocating agencies cannot rigorously assess the factors that drive costs, the reasonableness of costs, and the efficiency of LIHTCs in producing affordable housing. Currently, no standards exist for collecting and maintaining data related to LIHTC project costs. Agencies Inconsistently Collected or Defined Key Variables In conducting our evaluation of LIHTC development costs, we aimed to collect data that would allow us to assess costs associated with federal preferences for LIHTC developments outlined in Section 42; assess costs associated with certain allocating agency preferences, which we identified through a literature review and interviews with selected industry groups; and compare LIHTC development costs to market-rate development costs, a potentially useful step in assessing the reasonableness of project costs as required under Section 42. Comprehensive information about project costs and characteristics is needed to conduct such an evaluation. However, inconsistencies in allocating agencies’ collection or definition of certain variables complicated our efforts to estimate statistical associations with costs, as follows. Developer characteristics. Allocating agencies did not maintain information on developers in a manner that readily permitted classification by for-profit or nonprofit status. We estimated the association between nonprofit status and development costs based on projects that received credits under nonprofit set-asides. A limitation of this approach is that it does not account for projects with nonprofit developers that received credits apart from the set-asides. For example, almost 80 percent of Washington’s projects in our sample had a nonprofit developer, but only 32 percent received credits under the nonprofit set-aside. Additionally, allocating agencies maintained tax identification numbers that would allow them to assess the influence of developer experience or incumbency—that is, how frequently a developer is awarded credits—on costs. But this information was not part of our data set, and we found that alternative variables (such as developer name) were unreliable for purposes of conducting a similar analysis. Tenant type. Allocating agencies identified and defined tenant types differently, partly as a result of their specific QAP priorities. For example, New York defined 39 distinct tenant types and Texas defined 2 (family and elderly). Consequently, we could not standardize tenant types across agencies and estimate associations with development costs, other than for projects targeted to seniors, a population for which there is a specific federal definition. Energy efficiency. Among our 12 selected allocating agencies, only California, Florida, and Texas collected information needed to assess the influence of energy-efficiency features on project-development costs. This information generally took the form of whether a project received a Leadership in Energy and Environmental Design (LEED) certification, a component of which is energy efficiency.
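One way to mitigate definitional differences like those above is to map each agency's categories to a common scheme before analysis. The sketch below illustrates the idea for tenant types; the agency labels are hypothetical stand-ins for the varied categories we encountered, and in practice we could standardize reliably only where a common definition existed, as with senior housing.

    # Mapping agency-specific tenant-type labels to a common scheme;
    # the labels below are hypothetical examples.

    TENANT_TYPE_MAP = {
        ("NY", "Senior Citizens 62+"): "senior",
        ("NY", "Supportive Housing"): "special_needs",
        ("TX", "Elderly"): "senior",
        ("TX", "Family"): "family",
    }

    def standardize_tenant_type(agency, agency_label):
        # Fall back to "unclassified" rather than guessing across
        # inconsistent agency definitions.
        return TENANT_TYPE_MAP.get((agency, agency_label), "unclassified")

    print(standardize_tenant_type("TX", "Elderly"))         # senior
    print(standardize_tenant_type("NY", "Artists Housing")) # unclassified

Payment of prevailing wages.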
Some states also may require the payment of prevailing wages (generally, the hourly wage and benefits paid to the majority of workers in a particular area). In addition, certain federal funding sources commonly used as gap financing in LIHTC projects require the payment of prevailing wages. However, the agencies in our sample did not consistently capture information on whether projects paid these wages. Proximity to transit or other amenities. Most of the selected allocating agencies required or awarded points to projects located near certain amenities such as grocery stores, hospitals, or public transit. However, none maintained readily accessible data indicating which completed projects had this characteristic. Therefore, to estimate statistical associations between a development’s proximity to transit and development costs, we merged project address information with federal and local transit data. We were not able to estimate associations between other amenities and development costs. Square footage. Four of the 12 selected allocating agencies independently determined, or provided us with information we could use to calculate, the gross square footage of projects. Construction cost per gross square foot is a commonly used measure in the construction industry and useful for comparing LIHTC project costs to construction industry benchmarks. Additionally, because it encompasses the entire size of the structure, this measure relates project cost to project scale more precisely than other common measures, such as cost per unit and cost per residential square foot. Building type. The selected allocating agencies varied in how they defined and classified building types—such as single-family, multifamily, high-rise, mid-rise, or low-rise. As previously discussed, we classified projects generally based on the number of units and number of buildings they contained because data inconsistencies precluded more precise classifications. Number of residential and nonresidential buildings. All of the selected allocating agencies collected data on the number of residential buildings in each project, but only five collected data on the number of nonresidential buildings. As with gross square footage, this information would allow cost assessments based on a project’s entire physical footprint. Additionally, this information would allow agencies to refine per-unit cost measures by subtracting the cost of nonresidential spaces (for example, community or other common areas) from per-unit cost totals. Primary construction materials. The project documents we reviewed from the selected allocating agencies generally did not include data on the primary construction materials (for example, steel, concrete, brick, or wood). Including this information in data maintained on completed projects would help better explain cost variances between otherwise similar projects (for example, a 3-story building constructed with brick versus a 3-story building constructed with wood). This information is similarly useful for comparing LIHTC project costs to construction industry benchmarks. Number of stories per building. A few agencies, including Arizona, California, and Texas, collected data on the number of stories per building in each of their projects. As previously discussed, development costs may increase for taller structures due to design requirements. As a result, data on the number of stories would facilitate cost comparisons across similar structures and assessment of costs against construction industry benchmarks.
Total syndication expenses. As discussed later in this report, none of the selected allocating agencies collected information on total tax credit syndication expenses. This information is necessary for understanding the cost of developing affordable-housing projects with LIHTCs.

Agencies Maintained Data in Different Formats

We also found that the 12 allocating agencies maintained cost-related LIHTC data in a variety of formats, ranging from paper records or electronic files for individual projects to electronic spreadsheets with information on multiple projects, as shown in the following examples. Illinois provided us with scanned copies of paper applications and cost certifications for each project. California provided us with a mix of scanned copies of paper and electronic applications and cost certifications for individual projects. Ohio provided us with a consolidated (or single) electronic spreadsheet containing line-item costs for all projects. This variation made it difficult to efficiently collect the data and put them in a format suitable for analyzing cost trends and drivers. To create a data set suitable for analysis, we manually entered data for the 1,356 projects with paper files and used statistical software to consolidate spreadsheet data for the remaining 493 projects.

Agencies also did not collect data using standardized cost categories for analysis. As a result, we met with individual allocating agency officials to define each variable and ensure that we consistently categorized data across the agencies. Some examples of differences in how the data were defined include the following:

New York City did not separate construction-related fees from construction costs. As a result, we were not able to compare construction costs for projects in New York City to construction costs for projects from the other 11 allocating agencies.

Some allocating agencies—for example, New York—did not include a line item for syndication expenses on their cost certifications. On cost certifications without a syndication line item, developers generally are expected to report those costs on the legal or partnership line item. As a result, we were unable to report information on syndication expenses incurred at the project level.

Similarly, some allocating agencies' cost certifications combined line-item costs that others did not. For example, 11 of the selected allocating agencies required developers to separately report general contractor overhead, profit, and general requirements, while 1 (New York City) generally required developers to combine the three costs under one line item. As a result, we had to create broad cost categories and were not able to assess costs at the line-item level.

Ways in Which Standardized Data Can Facilitate Agencies' Cost Assessments

Few of the selected allocating agencies comprehensively or systematically evaluated data to determine the effect of their policies, including their cost-management approaches, on project development costs. Our analysis in the previous sections of this report highlighted ways in which allocating agencies can use and benefit from standardized data, including for project cost assessments. Individual allocating agencies could use data to more effectively identify cost drivers and trends over time. We have discussed how certain project characteristics were associated with higher and lower per-unit development costs. Our analysis illustrates how agency priorities and practices may influence costs, as shown in the following examples.
Texas had the lowest median per-unit development costs among the selected agencies and tended to award credits to large garden-style apartments (low, clustered buildings).

Georgia also had comparatively lower development costs. The agency funded the highest percentage of senior projects among the selected states (48 percent) and also funded the lowest percentage of urban projects (55 percent).

Washington had among the lowest soft costs as a percentage of total development costs. Agency officials told us they used a consolidated application for awarding public funds—including LIHTCs, state tax credits, and HOME funds—that streamlines the application process for developers and reviewers and helps reduce soft costs.

California had the highest land costs and soft costs among the selected agencies. The agency prioritized funding projects in job centers (urban areas), and completed projects used six funding sources in addition to tax credit equity, on average.

Chicago had the highest construction costs as a percentage of development costs among the 12 selected agencies and did not have a cap on development costs or eligible basis.

Florida had the highest developer fees among the selected agencies. Our analysis showed the median developer fee in Florida was about $2.1 million for projects completed in 2011–2015; the next highest median fee was about $1.5 million (in New York and Texas). The agency's 2017 QAP set developer fees generally at 16 percent of development costs, one of the highest rates among the selected agencies.

In turn, agencies that have identified their cost drivers and trends could look to the experience of other agencies for examples of relevant ways to contain costs. For example, agencies with comparatively high costs—either overall or in particular cost categories—might benefit from considering the cost-management approaches of agencies with lower costs.

Complete Data on Total Tax Credit Syndication Expenses Are Lacking

Syndication expenses represent a significant cost of producing affordable housing with LIHTCs, but complete data on syndication partnerships generally were lacking. As shown in figure 13, syndication expenses include expenses at the upper-tier and lower-tier partnerships of a LIHTC deal. Investors pay for upper-tier expenses in the form of a syndication fee, similar to a load fee paid to a mutual fund manager. The fee covers expenses related to establishing, originating, underwriting, and closing on projects for the investment fund and is paid out of the equity investors contribute to the partnership. As a result, the fee facilitates equity investment in a fund's LIHTC projects, while also reducing the amount of the equity investment available to each project. At the lower-tier partnership level, a project developer may pay a fee to the syndicator for project-specific legal and accounting expenses. The lower-tier syndication fee is typically less than the upper-tier fee.

In a February 2017 report on the role of LIHTC syndicators, we cited an industry stakeholder's estimate that upper-tier syndication fees for LIHTC funds were 2–5 percent of equity. According to a 2018 report by a national accounting firm, upper-tier syndication fees ranged from 5–8 percent of equity for multi-investor funds closed in recent years. For perspective, 2–8 percent of a $7.6 million investment (the estimated median amount for our 12-agency project sample) is $152,000–$608,000.
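The arithmetic behind that range is straightforward; the following is a minimal sketch in Python, using the cited fee rates and the estimated median equity amount from our sample:

    # Sketch of the fee arithmetic above. The equity figure is the estimated
    # median for our 12-agency sample; the rates span the cited 2-8 percent range.
    median_equity = 7_600_000  # dollars

    for rate in (0.02, 0.08):
        fee = median_equity * rate
        print(f"Upper-tier fee at {rate:.0%} of equity: ${fee:,.0f}")
    # Prints $152,000 at 2 percent and $608,000 at 8 percent.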
The accounting firm report also noted that the market for acquiring projects and attracting investor capital is highly competitive. As a result, syndicators may reduce or defer their fees to attract projects and investor capital.

IRS regulations require project developers to report syndication expenses on their final cost certifications. IRS officials told us that the regulations require the reporting of all syndication expenses, including upper-tier and lower-tier fees, on the cost certification. They said the regulation helps to ensure that allocating agencies have complete information to assess the financial feasibility of projects, as required under Section 42. Additionally, written guidance for IRS examiners states that syndication costs need to be accounted for, although they are not includable in eligible basis (allowable costs for calculating tax credit awards), to ensure they have not been accumulated with other costs for a line item on the certification.

However, our 12 selected allocating agencies did not require developers to report upper-tier syndication expenses on final cost certifications and generally did not have data on these expenses. Allocating agency officials told us that developers generally report costs directly attributable to the project (including lower-tier syndication expenses) on the cost certifications. In explaining their practices, allocating agency officials said they did not consider upper-tier syndication expenses to be project costs because they are not directly incurred by the developer. Some of the officials noted that developers select investors based on the net equity (gross equity minus upper-tier expenses) or net price offered in exchange for the tax credits, and therefore may not be aware of the fees investors pay syndicators. Additionally, accounting firm officials said that if upper-tier expenses were included on the cost certification, they would not be able to access or verify documentation from the upper-tier partnership when auditing cost certifications because the upper- and lower-tier partnerships are separate legal entities.

Outside of the cost-certification process, some of the selected allocating agencies said they receive investor letters or other documentation from syndicators that disclose upper-tier syndication expenses. These letters typically state the gross and net equity amounts attributable to each project, or a gross and net credit price offered in exchange for a developer's credits. Some of the letters we reviewed also detailed the syndicator's services and related expenses in addition to gross and net equity amounts or credit prices (for example, amounts for investor fees, organizational and offering expenses, acquisition expenses, and reserves and working capital). These examples suggest that information on upper-tier syndication expenses is available and allocable to specific projects.

The gap between IRS's expectations and allocating agencies' practices developed, in part, because IRS has not clearly communicated expectations to allocating agencies about reporting of upper-tier syndication expenses. None of the documents IRS pointed to—the regulations, Technical Advice Memorandum, or Revenue Ruling previously cited—draws a clear distinction between upper- and lower-tier expenses, leaving the requirement open to interpretation. The documents also do not address issues that developers, allocating agencies, and auditing firms may have in obtaining and reviewing upper-tier fees.
Federal internal control standards state that management should externally communicate—to contractors and regulators, among others—the necessary quality information to achieve the entity's objectives. Without clear communication to allocating agencies on how to report syndication costs, IRS lacks assurance that the cost-certification requirement provides the level of financial transparency and accountability it expects.

More complete collection of data on syndication expenses also would help answer key questions in our 2013 tax expenditures evaluation guide, which provides a framework for evaluating the effectiveness of tax expenditures. Examples of questions relevant to syndication expenses include the following:

What are the costs of the resources used to generate the tax expenditure's benefits? The costs of using syndicators cannot be known without disclosure of the upper-tier expenses for which LIHTC investors pay from their equity contributions.

Who actually benefits from the tax expenditure? Disclosure of the fees syndicators receive would aid assessment of the benefits received by syndicators in relation to benefits received by other LIHTC program participants.

The ability to answer these questions more fully would help Congress assess the costs, benefits, and efficiency of the LIHTC program relative to affordable housing programs that use delivery mechanisms other than tax expenditures.

No Federal Agency Monitors and Assesses LIHTC Development Costs

No federal agency monitors or assesses LIHTC development costs, which are key to evaluating the efficiency and effectiveness of the tax credit program. In a July 2015 report on federal oversight of LIHTC, we found that although IRS is the only federal agency responsible for overseeing the LIHTC program, it does not assess the performance of the program. IRS officials said the agency's role is focused on ensuring taxpayer compliance and that the agency generally does not have the authority or funding to assess the performance of tax expenditures, including LIHTC.

Unlike for the LIHTC program, Treasury collects and reports data on the New Markets Tax Credit program, for which Treasury has a more direct administrative role. The Community Development Financial Institutions Fund within Treasury uses its Awards Management Information System and its Community Investment Impact System to collect and report detailed information on New Markets Tax Credit projects, including certain cost and project characteristics data. Treasury produces annual research reports and periodic research briefs using these data.

Consistent with a recommendation in our July 2015 report, IRS and Treasury officials said HUD may be better equipped to determine what data should be collected to assess LIHTC performance. Although HUD is the government's lead housing agency, it currently plays a limited role in collecting and reporting data for the LIHTC program. Specifically, HUD collects and periodically reports information on LIHTC tenant characteristics as mandated by the Housing and Economic Recovery Act of 2008. In addition, since 1996, HUD has voluntarily collected LIHTC project-level data in its LIHTC database. While HUD may have the technological capacity to collect and maintain additional LIHTC data, absent additional authority, the agency does not have access to IRS taxpayer (developers and allocating agencies) data, including cost data.
If HUD or another agency were given authority to collect and report on these data, it likely would need additional budgetary resources to carry out this function.

Our tax expenditure evaluation guide outlines information Congress could consider when determining which federal agencies should manage the evaluation of tax expenditures. The guide cites statutory requirements that set the expectation that agencies should consider tax expenditures in measuring and communicating progress in achieving their missions and goals. It also states that for tax expenditures without logical connections to program agencies, Treasury may be the most appropriate agency to conduct an evaluation. Historically, IRS and Treasury (the agencies with the authority to oversee the LIHTC program) have devoted few resources to that task. And although HUD has a logical connection to LIHTC as the lead federal housing agency, it does not have oversight authority, access to key data, or existing resources to carry out additional data collection for and assessments of the LIHTC program. Without federal monitoring and assessment of LIHTC development costs, federal agencies and Congress do not have information to assess the tax credit's efficiency and effectiveness.

Conclusions

The LIHTC program plays an important role in addressing the housing needs of low-income renters, but some LIHTC projects have been scrutinized for high or fraudulent development costs. Our analysis provides a broad perspective on development costs across a range of allocating agencies and illustrates the types of insights that can be gained from standardized data on project costs and characteristics. These include identification of cost drivers and trends that may help target cost-management efforts. However, our work also identified shortcomings in program data and administration that hamper oversight and are inconsistent with federal evaluation criteria and internal control standards.

Although the LIHTC program represents the largest source of federal assistance for developing affordable housing, Congress has not specifically designated an agency to evaluate the program's performance. Without a designated entity for collecting, maintaining, and assessing data on LIHTC project costs, federal agencies and Congress lack information needed to oversee billions of dollars in tax expenditures.

The current IRS cost-certification requirement for LIHTC projects is limited to aggregated developer costs and does not directly address a known fraud risk. General contractor cost certifications required by some allocating agencies may help deter fraud by providing information that can be used to corroborate developer cost certifications. But because IRS does not require general contractor cost certifications for LIHTC projects, the LIHTC program may be vulnerable to fraud involving misrepresentation of costs.

The lack of standards for collecting and maintaining data related to LIHTC project costs has resulted in inconsistent data quality and formats among allocating agencies. In the absence of a federal agency designated to collect data and assess program performance, greater standardization of cost data by allocating agencies would lay a foundation for deeper analysis of cost drivers and cost-management practices by allocating agencies and industry stakeholders. This analysis could be used to help increase the efficiency of the LIHTC program.
IRS has not clearly communicated how allocating agencies should collect and review syndication expenses—particularly, upper-tier fees—to meet a regulatory requirement. As a result, information on a significant program cost is not transparent or available to conduct the types of financial assessments IRS expects allocating agencies to perform.

Matter for Congressional Consideration

Congress should consider designating an agency to regularly collect and maintain specified cost-related data from credit allocating agencies and periodically assess and report on LIHTC project development costs. (Matter for Congressional Consideration 1)

Recommendations for Executive Action

We are making a total of three recommendations to IRS:

IRS's Associate Chief Counsel, in consultation with Treasury's Assistant Secretary for Tax Policy, should require general contractor cost certifications for LIHTC projects to verify consistency with the developer cost certification. (Recommendation 1)

To help allocating agencies analyze development cost trends and drivers and make comparisons to other agencies, IRS's Commissioner of the Small Business/Self-Employed Division should encourage allocating agencies and other LIHTC stakeholders to collaborate on the development of more standardized cost data, considering information in this report about variation in data elements, definitions, and formats. (Recommendation 2)

IRS's Associate Chief Counsel, in consultation with Treasury's Assistant Secretary for Tax Policy, should communicate to credit allocating agencies how to collect information on and review LIHTC syndication expenses, including upper-tier partnership expenses. (Recommendation 3)

Agency and Third-Party Comments and Our Evaluation

We provided a draft of this report to IRS, Treasury, and HUD for their review and comment. IRS provided written comments that are reprinted in appendix VII. Treasury and HUD did not provide comments. We also provided a draft to NCSHA for its review and comment. NCSHA provided written comments that are reprinted in appendix VIII.

IRS disagreed with our recommendation to require general contractor cost certifications for LIHTC projects. IRS said it was not clear whether the recommendation would uncover and deter misrepresentation of contractor costs. We maintain that requiring general contractor cost certifications would help address this fraud risk by providing greater cost transparency to allocating agencies and auditors. Our report notes that a number of allocating agencies already have similar controls and that the Florida agency began requiring general contractor cost certifications in response to fraudulent contract-inflation schemes that were the subject of federal legal actions. Furthermore, NCSHA's recommended practices advise allocating agencies to implement additional cost certification due diligence for all LIHTC projects. We believe that general contractor cost certifications should be required to help ensure the efficient and effective use of federal resources programwide.

IRS disagreed with the recommendation in our draft report to collaborate with LIHTC stakeholders to develop a framework for the collection of cost-related data. The purpose of this recommendation was to promote creation of more standardized data to help allocating agencies analyze cost trends and drivers and make comparisons to other agencies.
IRS said that in the absence of specific authorization, it collects data only to the extent necessary for tax administration, and that collecting LIHTC cost data is not necessary for that purpose. IRS added that without statutory authorization or a tax administration need, any data collection would be a misuse of IRS resources. In response, we modified the recommendation in our final report to give IRS greater flexibility in promoting standardization of LIHTC cost data in ways consistent with its authority. For example, IRS could encourage development of more standardized data in its communications with LIHTC allocating agencies and stakeholders at industry meetings and conferences. Our report recognizes that IRS has not had a role in assessing the performance of tax expenditures. For this reason, our report also states Congress should consider designating an agency to regularly collect and maintain specified cost-related data from allocating agencies and assess and report on LIHTC project-development costs.

Finally, IRS disagreed with our recommendation to communicate to allocating agencies how to collect and review information on LIHTC syndication expenses, including upper-tier partnership expenses. IRS said that existing regulations require agencies to collect and evaluate all sources and uses of project funds and that this covers syndication expenses, including upper-tier partnership expenses. IRS said to the extent that we were recommending that it revise regulations, the agency did not necessarily have the authority to mandate how allocating agencies collect syndication expense data. IRS's response suggests the reporting requirements are clear. However, as stated in our report, the 12 allocating agencies we reviewed and other LIHTC stakeholders did not share IRS's understanding of the requirement. Consequently, the allocating agencies did not require developers to report upper-tier syndication expenses and generally did not have data on the expenses. In its comments on our report, NCSHA also expressed surprise at IRS's explanation (see discussion below and app. VIII). Finally, our report does not state that IRS should revise its regulations. Rather, it recommends that IRS communicate its requirement to allocating agencies. The wording of our recommendation provides IRS the flexibility to communicate the requirement in whatever way it deems appropriate. As a result, we made no changes to the recommendation.

In its comments, NCSHA expressed concerns about our recommendation and matter for congressional consideration about collecting and analyzing LIHTC cost data. NCSHA questioned the cost-effectiveness of requiring consistent data across states and did not believe that cross-state comparisons were critical for evaluating LIHTC. For example, NCSHA said the utility of comparing Hawaii costs to Arkansas costs was not clear. NCSHA also noted LIHTC was designed to give allocating agencies flexibility, including in program design and data collection. We maintain that consistent data are important for program management and oversight. While cost drivers in states differ, our report notes that at least one allocating agency has funded a study to compare development costs with neighboring states. While we understand the LIHTC program gives states flexibility, a more standardized approach to data collection would not restrict allocating agency funding decisions or prevent agencies from collecting data they consider important.
Furthermore, consistent data collection would facilitate state and federal evaluations of the cost-effectiveness of a multibillion-dollar tax expenditure. NCSHA also expressed concern that Congress might require the data collection but not appropriate funds to implement the mandate. Our report acknowledges that if Congress were to grant an agency the authority to collect and report on LIHTC cost data, that agency likely would need additional budgetary resources to carry out this function.

Regarding our recommendation on general contractor cost certifications, NCSHA noted that more allocating agencies were likely to adopt NCSHA's recommended practices and require or encourage such certifications. However, allocating agencies voluntarily adopt recommended practices, and some agencies may view a general contractor cost certification as unnecessary. NCSHA added that instances of fraud were rare in the 30-year history of LIHTC, and affected agencies had responded in each known instance. We noted in our report that under the existing federal cost certification requirement—which stops at the developer level—the vulnerability of the LIHTC program to misrepresentation of general contractor costs is heightened. And while known instances of fraud schemes (such as the Florida examples cited in our report) may be limited, the true extent of fraud in the program is unknown. Federal internal control standards state that management should consider the potential for fraud when identifying, analyzing, and responding to risks. Requiring general contractor cost certifications for all LIHTC projects could help address this known fraud risk and further strengthen the integrity of the program.

Regarding our recommendation on syndication expenses, NCSHA was surprised IRS officials told us LIHTC regulations require reporting of all syndication expenses (including upper-tier expenses) on the project cost certification. NCSHA said it long understood that the cost certification must include only costs paid by the project partnership for the individual property (the developer) and that IRS never communicated otherwise. NCSHA also identified some potential difficulties with collecting and reporting information on upper-tier syndication fees. While our report discusses some similar concerns, it also provides examples of at least two allocating agencies that collect such information. NCSHA's response further supports our finding of a gap between IRS expectations and allocating agency practices for reporting syndication expenses and underscores the need for IRS to more clearly communicate its expectations on how to collect and review this information.

Finally, NCSHA said findings from its recently commissioned study of LIHTC development costs, which had not been released as of August 2018, were generally consistent with cost analyses in our report. NCSHA said its study and other information suggest LIHTC development costs generally were consistent with overall apartment development costs and grew at a similar or slower rate. We believe broad comparisons between LIHTC and non-LIHTC development costs should be viewed with caution. As our report notes, numerous limitations in available LIHTC cost data (among other factors) make it difficult to produce methodologically sound comparisons. If implemented, our recommendations to improve collection and analysis of LIHTC data could help overcome some of these difficulties.
We are sending copies of this report to the appropriate congressional committees, the Secretary of the Treasury, the Secretary of Housing and Urban Development, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or garciadiazd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IX.

Appendix I: Objectives, Scope, and Methodology

The objectives of this report were to analyze (1) development costs for Low-Income Housing Tax Credit (LIHTC) projects completed in 2011–2015 in selected locations and factors affecting these costs, (2) steps allocating agencies have taken to oversee LIHTC development costs, and (3) factors limiting assessment of LIHTC development costs. We selected 12 credit allocating agencies (representing 10 states and 2 cities) as the focus for key parts of our analysis discussed in more detail later in this appendix:

Arizona Department of Housing
California Tax Credit Allocation Committee
Chicago Department of Planning and Development
Florida Housing Finance Corporation
Georgia Department of Community Affairs
Illinois Housing Development Authority
New York City Department of Housing Preservation and Development
New York State Division of Housing and Community Renewal
Ohio Housing Finance Agency
Pennsylvania Housing Finance Agency
Texas Department of Housing and Community Affairs
Washington State Housing Finance Commission

To select these agencies, we ranked all states in order of their credit ceiling amount for 2015 and selected the two highest-ranking states in each of five geographic regions (West, Southwest, Midwest, Southeast, and Northeast). We then selected for review the 12 allocating agencies within those 10 states that administered 9 percent LIHTCs. These allocating agencies accounted for 50 percent of the total 9 percent credit ceiling amount in 2015.

To obtain general information for all of our objectives, we interviewed officials from the 12 selected allocating agencies, the Department of Housing and Urban Development (HUD), Department of the Treasury (Treasury), and Internal Revenue Service (IRS). We also interviewed representatives from 10 groups representing allocating agencies, developers, investors, syndicators, and other LIHTC interests, including Affordable Housing Investors Council; Affordable Housing Tax Credit Coalition; Recap Real Estate Advisors; Housing Partnership Network; Enterprise Community Partners; Mortgage Bankers Association; National Association of Home Builders; National Association of State and Local Equity Funds; National Council of State Housing Agencies (NCSHA); and Stewards of Affordable Housing for the Future. Additionally, we interviewed representatives of two national accounting firms—CohnReznick LLP and Novogradac & Company LLP—that have LIHTC practices and have conducted research on the LIHTC program.

Data Used in Our Analysis of Costs and Characteristics

To analyze the development costs of LIHTC projects completed in 2011–2015 in selected locations and characteristics associated with project costs, we created and analyzed a database of costs and characteristics for the 1,849 LIHTC projects that submitted final cost certifications to the 12 selected allocating agencies in that period and for which the cost certification was available.
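To illustrate the agency-selection step described at the start of this appendix, the following is a brief sketch in Python; the states, regions, and ceiling amounts shown are placeholders, not actual 2015 figures:

    import pandas as pd

    # Illustrative ranking logic: keep the two states with the highest
    # credit ceilings in each of five regions. Values are placeholders.
    states = pd.DataFrame({
        "state": ["CA", "WA", "TX", "AZ", "OH", "IL", "FL", "GA", "NY", "PA"],
        "region": ["West", "West", "Southwest", "Southwest", "Midwest",
                   "Midwest", "Southeast", "Southeast", "Northeast", "Northeast"],
        "ceiling_2015": [10.0, 4.5, 9.0, 4.0, 6.0, 5.5, 7.0, 5.0, 8.0, 6.5],
    })

    # Sort by ceiling amount, then take the top two states per region.
    selected = (states.sort_values("ceiling_2015", ascending=False)
                      .groupby("region", sort=False)
                      .head(2))
    print(selected)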
Collecting LIHTC Project Data

We first requested relevant documentation and data from the selected allocating agencies. Specifically, we requested the final cost certification for all projects that received 9 percent LIHTCs and were submitted in 2011–2015. We also included projects for which the selected allocating agencies initially reserved a tax credit allocation but exchanged the allocation for American Recovery and Reinvestment Act of 2009 funds. In addition to cost certifications, we also requested documentation and data that described project characteristics associated with project costs. We determined relevant characteristics to collect through a review of existing housing-agency-sponsored literature on LIHTC project costs. We identified existing literature through a literature search, and we confirmed the completeness of the literature with selected industry groups. The project characteristics we collected from the selected allocating agencies included the following:

Address (street, city, state, and zip code)
Construction type (new construction or rehabilitation)
Income limits for low-income units
Number of buildings (residential and nonresidential)
Number of units (low-income, market-rate, and employee-occupied)
Square footage (gross and residential)
Structural features (the presence of an elevator, green building certifications, and parking structures)
Net tax credit price
Tenant type (senior or nonsenior)
Unit sizes (number of bedrooms)
Year of completion (year final cost certification signed)

We used manual data entry and a statistical program to input the project costs and characteristics into individual databases we created for each selected allocating agency. We verified the accuracy of the manual data entries by having a second analyst review the entries of the first analyst. Additionally, a second analyst reviewed the statistical programs we created and a sample of the databases they created to verify their accuracy. After compiling the 12 databases, we compared our list of projects against HUD's LIHTC database to verify the completeness of our sample. For projects that we determined had been omitted, we requested their documentation and data from the relevant allocating agency, which we then manually entered into our databases and verified in the manner previously described.

Consolidating LIHTC Project Data

To perform analyses across all sampled projects, we consolidated the 12 allocating agency databases into one sample-level database. We first interviewed each of the selected allocating agencies to define data elements—including how to treat missing data—and determine the comparability of the data they provided. We also requested additional documentation and data, such as missing project addresses and data elements we identified after our initial data request. Additionally, we interviewed a national accounting firm that specializes in LIHTC cost certifications to further define cost data and learn more about their comparability across allocating agencies.

We then categorized project costs into aggregated categories. Line items in cost certifications were not comparable across all selected allocating agencies due to differences in how data were reported. For example, market study costs were listed separately on some cost certifications but aggregated with appraisal costs on others. To improve the comparability of cost data across allocating agencies, we developed and implemented a plan to categorize and consolidate cost data using a statistical program.
We developed the plan by reviewing the overlap between the line-item costs we collected. We also reviewed a study of multiple allocating agencies that was conducted by an accounting firm specializing in LIHTC cost certifications and which used a similar methodology to consolidate costs. Based on our plan, we categorized costs into three hard-cost and four soft-cost categories:

Construction: Costs related to the direct physical development of the project site and structures. These include change orders; construction trade material and labor (such as electrical, masonry, or roofing); contingencies; demolition; environmental remediation; furniture, fixtures, and equipment; landscaping and fencing; off-site and on-site improvements; other property assets (such as maintenance, office, or playground equipment); prevailing wages; site security (if listed separately from contractor fees); tenant relocation; and utilities during construction.

Existing structures: The purchased or appraised value of acquired structures.

Land: The purchased or appraised value of acquired or leased land.

Architect and engineer fees: Fees for architectural design and supervision and engineer services.

Contractor fees: Contractor general requirements, overhead, and profit.

Developer fees: Developer overhead and profit.

Other soft costs: Costs related to financing, tax credit partnership and syndication, predevelopment, professional services, and other indirect construction activities. These include accounting; agency fees (such as application, reservation, allocation, extension, compliance monitoring, and waiver fees); appraisals; broker fees and closing costs; capital needs assessments; certifications; construction-management fees; project supervision or monitoring; consultant fees; credit reports; environmental reports (such as asbestos and lead-paint tests); green building and energy efficiency design services; impact and utility connection fees; inspections; insurance (such as builders risk, general liability, hazard, and title insurance); surveys; legal fees; loan fees and interest (such as for predevelopment loans, construction loans, bridge loans, and permanent loans); market studies; payment or performance bonds; permits and other local fees; real estate taxes (during construction); soil borings and tests; and title searches and recording.

We also collected each project's total development cost and eligible basis from the cost certification. To isolate development costs, we subtracted from each project's total development cost all costs associated with prefunded reserves and postconstruction activities, such as marketing and rent-up period operating expenses.

We also developed and implemented a plan to consolidate project characteristics data into the sample-level database using a statistical program. We interviewed officials and reviewed documentation from selected allocating agencies about data definitions to determine the comparability of the characteristics data we collected. We then recoded comparable data elements using a standard coding system across all 12 allocating agencies. We conducted verification checks on the programs we created and the final database.

To assess the reliability of the project data, we tested each data field for missing values, obvious errors, and outliers—for example, whether per-unit costs were more than two standard deviations from an allocating agency's average.
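The outlier screen just described can be expressed in a few lines of code. The following is a minimal sketch in Python with pandas; the file and column names are hypothetical, and it is illustrative only rather than the statistical program we used:

    import pandas as pd

    # Illustrative outlier screen: flag projects whose per-unit cost is more
    # than two standard deviations from their allocating agency's average.
    # File and column names are hypothetical.
    df = pd.read_csv("lihtc_projects.csv")
    df["cost_per_unit"] = df["total_dev_cost"] / df["units"]

    grp = df.groupby("agency")["cost_per_unit"]
    df["deviation"] = (df["cost_per_unit"] - grp.transform("mean")).abs()
    df["outlier"] = df["deviation"] > 2 * grp.transform("std")

    print(df.loc[df["outlier"], ["agency", "project_id", "cost_per_unit"]])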
We communicated some outliers and inconsistencies to relevant allocating agency officials and made corrections to the database as necessary. We concluded that the data were sufficiently reliable for purposes of comparing LIHTC development costs within and across allocating agencies and for examining development cost drivers and trends. As an additional test, we compared summary statistics from applicable data elements in our database to comparable data elements in HUD's LIHTC database. We found that our data elements did not differ in significant ways from HUD's.

Incorporating Location Data from Secondary Sources

We then merged several additional location characteristics into our database from federal and public statistical sources. We first validated project addresses and then used them to determine the census tract for each project. We then used census tracts to incorporate data from the American Community Survey, including census tract size and population (which we used to calculate population density), median home value, poverty rate, and unemployment rate. Using the census tract, we also identified the Rural-Urban Commuting Area codes classification for each project, which we recoded to categorize each project as rural, suburban, or urban. We also identified whether each project was located in a qualified census tract or difficult development area using the 2017 HUD lists. Lastly, we used geographic information system software and the Department of Transportation's Fixed-Guideway Transit Network database to identify the distance from each project to the nearest transit station (train and bus rapid transit stations).

Before conducting our analyses, we prepared data analysis plans and interviewed selected representatives from industry groups and researchers to inform our efforts. We also clarified data interpretations and limitations with officials from the selected allocating agencies on an as-needed basis.

Costs and Characteristics of LIHTC Projects

To describe the costs and characteristics of LIHTC projects, we calculated and compared summary statistics for relevant database elements. To account for inflation, we converted all costs to 2015 dollars using the calendar-year, chain-weighted Gross Domestic Product price index. We also normalized costs by dividing the total development cost by the number of units. We then calculated and compared summary statistics for key categories, such as the number and median per-unit cost of new construction projects, and subcategories, such as the number and median per-unit cost of new construction projects in urban areas. We also repeated these analyses for each selected allocating agency.

To compare the cost of Chicago's and New York City's projects to other urban locations, we calculated and compared their median per-unit costs to costs in five other cities within our 12-agency sample that had comparable populations and densities. Using 2010 Census data, we selected the five densest cities (people per square mile) with populations of 300,000 or more, population densities of 5,000 or more people per square mile, and 10 or more new construction projects completed in 2010–2015. They were Los Angeles, Miami, Philadelphia, San Francisco, and Seattle. To identify all projects within the five selected cities, we matched the three-digit zip code prefixes associated with their U.S. Postal Service area (known as a sectional center facility) to the zip codes for sampled projects.
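A minimal sketch of the inflation adjustment and per-unit normalization described above, again in Python; the index values, file name, and column names are placeholders rather than the figures we used:

    import pandas as pd

    # Placeholder values standing in for the calendar-year, chain-weighted
    # GDP price index; actual published index values would be used.
    gdp_index = {2011: 98.1, 2012: 99.9, 2013: 101.6, 2014: 103.5, 2015: 104.6}
    base = gdp_index[2015]

    df = pd.read_csv("lihtc_projects.csv")  # hypothetical consolidated data set

    # Convert each project's total development cost to 2015 dollars using
    # the index value for its year of completion, then divide by units.
    df["tdc_2015"] = df["total_dev_cost"] * base / df["year_completed"].map(gdp_index)
    df["cost_per_unit_2015"] = df["tdc_2015"] / df["units"]

    # Median per-unit cost by construction type, as in our summary comparisons.
    print(df.groupby("construction_type")["cost_per_unit_2015"].median())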
To determine the composition of project costs in terms of hard and soft costs, we compared the sum of all hard costs and the sum of all soft costs to the sum of all total development costs by construction type. Hard costs included existing structures, land, and construction costs; soft costs included architect and engineer fees, contractor fees, developer fees, and other costs. We also compared the cost categories (such as construction costs) using the same approach as for hard and soft costs. We then repeated these steps for each selected allocating agency.

We also reviewed how LIHTC equity investments differed by construction type. We first calculated the equity investment for each project by multiplying the LIHTC allocation by the net credit price (both adjusted to 2015 dollars). We then calculated and compared the median per-unit equity investment and the percentage of the median per-unit total development cost that it comprised for new construction and rehabilitation projects.

To determine how total development costs changed over time, we calculated and compared the median per-unit cost for each year by construction type. We then repeated these steps for each allocating agency to determine how their costs changed over time. We also repeated the sample-level analysis over time excluding California's projects from the new construction pool and New York City's projects from the rehabilitation pool because, in both cases, their costs were among the highest, changed sharply in some years, and represented roughly one-fifth of all new construction and rehabilitation projects, respectively.

To determine how LIHTC construction costs changed over time relative to a federal index of construction costs, we calculated and compared the annual rates of change in the median per-unit cost of construction and contractor fees for sampled new construction projects to the rates of change in the annual averages for the Bureau of Labor Statistics' Producer Price Index by Commodity for Final Demand: Construction. This index tracks monthly price changes for construction materials, labor, equipment, and contractor fees. To account for the delay between when construction costs were incurred and projects completed, we compared the annual rates of change for the LIHTC projects to the annual rates of change in the average index value from the prior year. We also used the prior-year rate of change to generate a projection of LIHTC construction costs to determine how the sample trend differed from the index trend. For example, we calculated the projected cost in 2012 by inflating the actual cost in 2011 by the change in the average index value in 2010–2011.

To determine the association between the project characteristics we collected and per-unit development cost, we developed a statistical model and used ordinary least squares regression to estimate the controlled effect of specified characteristics on per-unit cost. For more detail on our statistical model and results, see appendix II. To further describe how project characteristics may have influenced costs, we calculated and compared summary statistics for the model characteristics among new construction projects below the 25th percentile or above the 75th percentile for per-unit cost within each allocating agency.
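A small Python sketch of the lagged-index projection described above. The index values and starting cost are placeholders, and the sketch assumes, for illustration, that each projected year chains from the prior projected value:

    # Placeholder annual averages for the BLS construction PPI and a
    # placeholder 2011 median per-unit cost; actual values would be used.
    ppi_avg = {2010: 100.0, 2011: 103.0, 2012: 105.5, 2013: 108.0, 2014: 111.0}
    projected = {2011: 120_000}  # 2011 actual median per-unit cost (placeholder)

    for year in (2012, 2013, 2014, 2015):
        # Inflate the prior year's cost by the index change observed one
        # year earlier (e.g., the 2012 projection uses the 2010-2011 change).
        growth = ppi_avg[year - 1] / ppi_avg[year - 2]
        projected[year] = round(projected[year - 1] * growth)

    print(projected)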
Steps Taken to Assess Allocating Agencies' Oversight of LIHTC Development Costs

To analyze steps allocating agencies have taken to oversee LIHTC development costs, we reviewed the Qualified Allocation Plans (QAP) and related documents (for example, policy manuals) for all 57 allocating agencies as of 2017. These agencies included all 50 states, the District of Columbia, the 4 U.S. territories that received a LIHTC allocation in 2017 (Guam, Northern Mariana Islands, Puerto Rico, and U.S. Virgin Islands), and the Cities of Chicago and New York. We conducted a structured analysis of the QAPs and related documents to gather information about agencies' policies and practices for managing and verifying project-development costs. We defined "cost management" as practices allocating agencies used to contain or limit development costs and fees, such as cost limits, credit allocation limits, fee limits, and cost-based scoring criteria. We defined "cost verification" as practices the agencies used to confirm the accuracy of project costs following construction—that is, whether the amount paid equaled the amount billed.

To obtain supplementary information on allocating agency approaches to cost management, we interviewed officials and reviewed additional documentation from the 12 selected allocating agencies, identified previously. Through this work, we identified a number of other steps those agencies took to limit LIHTC development costs. While the results of our supplementary work cannot be generalized to all allocating agencies, they provide additional insight into the cost-management approaches and cost-verification requirements of a diverse group of allocating agencies. For further context on cost-management approaches, we reviewed GAO and industry reports that analyzed allocating agency QAPs from prior years.

We also interviewed federal officials to obtain information about relevant LIHTC requirements and cost-management practices used in other federal programs that support development of affordable multifamily housing. Specifically, we spoke with IRS and Treasury officials about LIHTC cost-verification requirements and the approaches of allocating agencies to cost management. In addition, we interviewed HUD officials to identify cost-verification practices used in the HOME Investment Partnerships Program and the Federal Housing Administration's Multifamily Mortgage Insurance programs. To obtain additional information about allocating agency practices and the cost-certification process, we interviewed representatives of NCSHA, CohnReznick LLP, and Novogradac & Company LLP.

Steps Taken to Evaluate Factors Limiting Assessment of LIHTC Development Costs

To analyze factors limiting assessment of LIHTC development costs, we assessed the data we collected from the 12 allocating agencies. We identified and documented the consistency in cost-related variables agencies collected in several key documents and data sources, and how they defined the variables. We documented the formats in which agencies provided and maintained the data we requested and steps we took to standardize and combine data. We compared the variables the agencies collected against federal tax credit allocation priorities outlined in Section 42 of the Internal Revenue Code (Section 42), as well as certain allocating agency priorities.
In addition, we reviewed an off-the-shelf software package for cost estimation to determine what project characteristics were required to calculate estimates with the software, and evaluated the extent to which the selected agencies collected these characteristics. We also reviewed Section 42 and related regulations to ascertain requirements for reporting syndication expenses to allocating agencies and IRS, and interviewed IRS and Treasury officials about these requirements. We interviewed the selected allocating agencies about their practices for collecting and reviewing syndication expense information. We also interviewed CohnReznick LLP and Novogradac & Company LLP about the different fees syndicators charge to investors and developers, and the extent to which these fees are reported to allocating agencies. Finally, we reviewed our prior work on federal oversight of the LIHTC and other tax credit programs.

We conducted this performance audit from May 2015 to September 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Description of Our Statistical Model to Examine Factors Associated with Development Costs for Low-Income Housing Tax Credit Projects

This appendix provides an overview of our statistical analysis of factors associated with the cost of producing affordable rental housing supported by the Low-Income Housing Tax Credit (LIHTC). We developed a regression model that explains the costs based on a number of project characteristics and other factors. As described in appendix I, we developed a data set based primarily on information from 12 selected allocating agencies. The data set contains detailed information on 1,849 LIHTC projects with final cost certifications signed in 2011–2015 and provides broad geographic coverage, including urban, suburban, and rural locations. The data set also indicates whether a project was located in a qualified census tract or a difficult development area. We augmented these data with information from the American Community Survey and from USDA to enable us to control for certain neighborhood characteristics that may be associated with the cost of developing and constructing LIHTC projects.

Key Characteristics of the Projects

Table 4 below provides an overview of project costs and some key attributes of projects in our sample and highlights the variation across the allocating agencies. The average total cost per unit in our data set is about $220,000 (in 2015 dollars). The average total cost per unit was greater than $300,000 in California and Chicago and less than $150,000 in Georgia and Texas. Construction costs per unit exceeded or approached $200,000 in Chicago and New York City and were less than $100,000 in Georgia and Texas. Project scale varied across the agencies, reflecting differences in built environments, property costs, and other factors; projects averaged 66 units and 7.5 buildings.

The cost of land and existing structures can be a large component of project development costs. Land costs can scale with project size (an apartment complex of 12 buildings could require twice as much land as a complex of 6 buildings) as well as with underlying market land values.
The median land value across all projects was about $400,000, and was more than $1,000,000 in California and Florida. But the median land cost in New York City was about $1, suggesting that land and structures were donated. Given the market values of New York City real estate, total development costs for some New York City projects are likely to be understated when compared to projects in other jurisdictions.

Variable Definitions

Variables Describing Project Characteristics

The data set includes detailed information on program characteristics (discussed previously) that we used to define explanatory variables. We included the size of projects as defined by total units and placed them in four size categories (fewer than 37 units, 37–50 units, 51–100 units, and more than 100 units). To develop a project-type categorization, we incorporated information on the number of residential buildings. Projects can come in many combinations of building count and building size (number of units). For instance, a 60-unit project could be a single 60-unit building, 10 6-unit buildings, or 30 2-unit buildings. We distinguished projects in which the average building size had at least 60 units ("larger buildings" category) and projects with at least 20 buildings ("many buildings" category). We placed all remaining projects in a large residual category. This category is somewhat independent of size and primarily is meant to distinguish among types of projects that might require specialized construction or project-management skills.

We also created variables to provide information on the distribution of units by number of bedrooms within each project. Bigger units, those with more bedrooms, are more costly to build. We created three unit-size categories: 0-1 bedroom, 2 bedrooms, and 3 or more bedrooms. We defined the values as shares of total units in the category. For example, if a given project had 80 units, 20 of which had 1 bedroom, 40 of which had 2 bedrooms, and 20 of which had 3 bedrooms, the values for these variables would be 0.25, 0.5, and 0.25, respectively. The values sum to 1 across the categories.

We used binary variables to indicate if projects were new construction or rehabilitation. New construction is generally thought to be more expensive than rehabilitation on average, given site work and possible demolition requirements. We also developed variables to indicate if a project was targeted to seniors and if it served low-income tenants exclusively or a mix of low-income and other tenants. We used two variables (yes or no binaries) to indicate if a project was in a qualified census tract or difficult development area. Within the LIHTC program, the size of the credit awarded for a given project may be increased if the project is located in such areas. We also used information on other project characteristics that would affect costs, which we obtained for some, but not all, allocating agencies. For instance, for two agencies we could indicate that the project included parking structures (as opposed to a surface parking lot or stand-alone garage or carports), and for three agencies, that projects were built according to Leadership in Energy and Environmental Design (LEED) standards.
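A minimal sketch of how several of these variables could be constructed, in Python with hypothetical column names; the precedence between the "larger buildings" and "many buildings" categories is an assumption made for illustration:

    import pandas as pd

    df = pd.read_csv("lihtc_model_data.csv")  # hypothetical data set

    # Four project-size categories based on total units.
    df["size_cat"] = pd.cut(df["units"], bins=[0, 36, 50, 100, float("inf")],
                            labels=["<37", "37-50", "51-100", ">100"])

    # Project type: "larger buildings" if average building size is at least
    # 60 units; "many buildings" if at least 20 residential buildings;
    # all remaining projects fall into the residual category.
    avg_bldg_size = df["units"] / df["res_buildings"]
    df["proj_type"] = "residual"
    df.loc[avg_bldg_size >= 60, "proj_type"] = "larger buildings"
    df.loc[df["res_buildings"] >= 20, "proj_type"] = "many buildings"

    # Unit-size shares, which sum to 1 across the three categories.
    # The 80-unit example in the text (20/40/20) yields 0.25, 0.5, 0.25.
    for col, share in [("units_0to1br", "share_0to1br"),
                       ("units_2br", "share_2br"),
                       ("units_3plusbr", "share_3plusbr")]:
        df[share] = df[col] / df["units"]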
Variables Describing Project Financial Support and Developer Type

Variables from Other Sources to Control for Neighborhood and Geography

A broad set of factors related to local conditions, as well as conditions such as whether project locations are rural or urban, likely influence the costs of developing and building projects. Thus, we also used codes developed by USDA (the Rural-Urban Commuting Area codes) to place each project into rural, suburban, or urban categories. We also controlled for neighborhood rent levels, measured relative to the distribution of median contract rents within each state rather than in absolute dollars, because a given dollar amount of rent represents access to different housing quality in different places. That is, neighborhoods in which rents are high or low may share common characteristics across the country. We also used a series of allocating agency dummy variables and a series of project year dummy variables to control for otherwise unmeasured factors that may be common across projects or conditions in each agency jurisdiction or year, respectively.

Information on Omitted Categories for Categorical Variables

Many of the explanatory variables in the model are categorical variables, and thus the coefficient estimates presented in the tables in this appendix need to be interpreted in terms of differences from an omitted category. The omitted categories are for project scale, projects with fewer than 37 units; for project type, all projects in which there are fewer than 60 units per building and fewer than 20 residential buildings; for unit size, the 2-bedroom group; for age of housing stock, median year built between 1945 and 1994; for contract rent, neighborhoods in which the median contract rent is between the 25th percentile and median values of the state-wide contract rent; and for geographic area, suburban.

Some allocating agencies did not have complete information about whether other program funding, such as funding from Rural Development or ARRA programs, was used for projects. Conceptually, these variables are yes or no binaries. One approach is to add an "unknown" category in addition to the usual yes or no binary. That is, the categorization becomes "known yes," "known no," and "unknown." An alternative approach is to treat missing information as the absence of the characteristic of interest. Using the three-category approach generally yielded virtually identical results to the alternative in which "missing" information was treated as the absence of the characteristic. In general, we used a traditional binary structure. In one case, we kept the three-category structure. Specifically, we created a measure across agencies as to whether projects were targeted solely to low-income tenants or to a mix of low-income and other tenants. In many cases and across many agencies, we were not able to reliably make this determination using information in the data set. For estimation purposes, we included the unknown and known low-income category binary variables and omitted the known mixed-income category. The interpretation of the known low-income category is still the difference from the known mixed-income category. Other variables are binary, indicating the presence of the characteristic (such as if the project used a Rural Development loan or not, or was in a qualified census tract or not).

Regression Strategy

Following Cummings and DiPasquale, we estimated a regression model to explain total development costs per unit—and alternatively, measures of construction costs and soft costs separately—as depending on these project and neighborhood characteristics.
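A minimal sketch of this estimation strategy in Python using statsmodels, with hypothetical column names and only a subset of the controls described above; it is illustrative rather than a reproduction of our model:

    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("lihtc_model_data.csv")  # hypothetical data set

    # Per-unit total development cost regressed on a subset of project,
    # neighborhood, agency, and year controls. Categorical terms are
    # estimated relative to omitted categories, as described above.
    model = smf.ols(
        "cost_per_unit ~ C(size_cat) + C(proj_type) + share_0to1br"
        " + share_3plusbr + new_construction + senior + qct + dda"
        " + C(location_type) + C(agency) + C(year_completed)",
        data=df,
    )

    # Ordinary least squares with heteroscedasticity-consistent standard
    # errors, consistent with the specification described in this appendix.
    results = model.fit(cov_type="HC1")
    print(results.summary())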
We developed a base case model including the variables discussed previously and estimated this model using all 1,849 observations. The pooled sample, because it provides a broad range of conditions and policy responses, can permit a similarly broad view of the influences on LIHTC project costs. At the same time, we wanted to have some idea about how sensitive broad, overall results were to the influence of conditions and policy responses of particular jurisdictions. (We would expect housing market conditions and housing policy responses to differ across agencies.) Thus, we also present the same model estimated on three different subsamples in which the projects of particular allocating agencies were excluded. The pooled sample and subsample results are shown in table 5 later in this appendix. Specifically, we present results on samples excluding projects in California, New York City, and Texas in turn. California had the highest average total cost, highest (observed) land costs, and biggest program in terms of allocation of tax credits and units placed in service. New York City is a completely urban jurisdiction. About 75 percent of its projects were rehabilitation projects (compared to about one-third for the entire sample). More than half of its projects were in neighborhoods in which the median year the housing stock was built was 1945 or earlier (compared to about 15 percent for the entire sample). Texas had the lowest total cost and lowest construction costs and soft costs per unit, with many large, multibuilding projects that may be impractical in some other contexts. It was second to California in allocation of tax credits and units built. Housing conditions in the three jurisdictions and policy options favored by these jurisdictions may not represent conditions and policy options easily available or desirable in other jurisdictions. We also present estimates explaining construction costs per unit and soft costs per unit as alternatives to total costs. The construction cost measure includes costs for site and structure work and fees paid to the building contractor. We defined a broad soft cost measure to include predevelopment costs, financing costs, legal fees, architect and engineer fees, developer fees, and project-level partnership and syndication fees. Some factors may be more associated with the construction-cost component and less associated with the soft-cost project-development component, or vice versa. These results are shown in table 6. Sensitivity Analysis We also present results using the pooled sample set for three variations of the base specification. The first variation omitted the property value variable. Property values vary within states and metropolitan areas, as well as across the states. We examined the extent to which the presence of this control affected the influence of other factors. The second variation omitted variables related to neighborhood characteristics. The third variation omitted the variables related to other types of housing support (for example, HOME funds). These results are shown in table 7. In table 8, we present results examining the effects of the American Recovery and Reinvestment Act of 2009 (ARRA), restricting the sample to projects that received final cost certifications in 2011 and 2012. In table 9 we present results concerning possible cost-related features (parking structures, LEED certification, and developer type) for specific agencies and a subset of projects. We addressed whether our estimates were sensitive to the possibility that observed values for total cost might be artificially low when land or structures were acquired at very low or zero cost.
We restricted projects to those in which land and structure costs accounted for at least 1 percent of total development costs and estimated our model on this subsample using both total costs and construction costs as dependent variables. We present our results in table 10. We examined whether the results were sensitive to the form in which some credits were granted in New York City. That is, credits awarded in New York City to many single-building projects appeared to be part of larger neighborhood clusters under common development. In an alternative version, we aggregated project-level information to the level of multibuilding project clusters. We present the results in table 11. Finally, we looked at whether proximity to transit affected project costs. Some allocating agencies may offer incentives for transit-oriented developments—or projects within certain proximity to public transit. These areas may have higher land and construction costs due to higher density and demand within urban environments. Using projects within 2 miles of a transit station and various distance ranges, we estimated the association with per-unit total and construction costs. We present the results in table 12. Regression Specification We used ordinary least squares estimation with heteroscedasticity-consistent standard errors. This model allowed us to make statements concerning the association of explanatory factors with project costs, given that other explanatory factors were held constant. As is the case in such models, we generally can discuss only associations between explanatory factors and the cost measure to be explained, and not causality. For example, the use of other sources of government funding may have directly increased construction costs, as fund usage can trigger federal prevailing wage requirements. On the other hand, these other funding sources may have been used in addition to LIHTC equity to fill funding gaps for projects with particularly high costs. Additionally, econometric estimates can be sensitive to model specification, variable definitions, and the omission of variables (for example, due to unavailable data) relevant to the outcome of interest. Because the data used to estimate the model include only LIHTC projects that were placed in service, we cannot make statements about how the costs of developing these projects may compare to other potential LIHTC projects or to projects developed and financed by the private sector. It is probably true that allocating agencies could have selected lower-cost (or higher-cost) projects compared to those actually selected, but whether or not this counterfactual housing would have better served the low-income population is a different question. Estimation Results Our results are presented in tables 5 through 12. Our estimates include allocating agency and project year dummy variables, which are not presented in the tables. The allocating agency dummy variables are agency-specific intercept shifts, given the estimation of common slopes, and largely pick up unexplained deviations from the pooled average costs. The project year dummy variables were estimated to be small and only rarely statistically significant. We also estimated a version in which each agency and project year combination had its own intercept shift, but these results were quite similar. The dependent variable in most cases is total development cost per unit, adjusted for inflation. Base Case Results and Sensitivity to Included Allocating Agencies In the base case, projects with more than 100 units were estimated to cost about $85,000 less per unit than projects with fewer than 37 units, consistent with economies of scale (see table 5).
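A minimal sketch of this estimation strategy, assuming the project-level variables constructed earlier are assembled in a data frame df that also carries an agency column, might look like the following in Python with statsmodels. The formula is abbreviated to a few of the explanatory variables for readability, and none of this is the code actually used for the report's estimates.

```python
import statsmodels.formula.api as smf

# Abbreviated base-case specification: per-unit cost on project
# characteristics plus agency and project-year fixed effects. The
# 2-bedroom share and the smallest size category are omitted as
# reference categories.
formula = (
    "cost_per_unit ~ C(size_cat) + larger_buildings + many_buildings"
    " + share_0_1br + share_3br + new_construction + senior"
    " + low_income_known + targeting_unknown + rd_loan_bin"
    " + C(agency) + C(project_year)"
)

# Ordinary least squares with heteroscedasticity-consistent
# (robust) standard errors.
base_fit = smf.ols(formula, data=df).fit(cov_type="HC1")
print(base_fit.summary())

# Sensitivity check: re-estimate on subsamples that exclude the
# projects of one allocating agency at a time.
for excluded in ["California", "New York City", "Texas"]:
    subsample = df[df["agency"] != excluded]
    sub_fit = smf.ols(formula, data=subsample).fit(cov_type="HC1")
    print(excluded, "excluded:", sub_fit.params["many_buildings"])
```

The same fitting call can be reused with construction or soft costs per unit as the dependent variable, which is how the alternative results in table 6 are described.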
Without California in the sample, per-unit costs for “many buildings” projects were estimated to be more than $10,000 higher than for more typical projects, controlling for other characteristics. This amount was estimated to be much smaller and statistically insignificant with California observations included. The share of 3-bedroom units was associated with higher cost per unit and was not particularly sensitive to the sample, although the degree to which a higher share of smaller units led to reduced cost per unit was less clear. Costs to develop senior projects were modestly lower, but estimates and statistical significance were sensitive to the agencies included. Projects targeted exclusively to low-income households (most projects) were estimated to be more costly to develop than mixed-income projects. These results were quite sensitive to the presence of projects approved by the New York City allocating agency. More than 40 percent of the mixed-income projects in the entire sample were in New York City. Many of New York City’s mixed-income projects had donated land and might not be comparable from a cost perspective to mixed-income projects in other locations. When we excluded New York City projects, our estimates showed no statistically significant difference in per-unit costs for low- and mixed-income projects. Notably, Rural Development loans were associated with sizeable reductions in costs. This may be partly due to the types of projects supported by Rural Development loans, such as farm labor housing (which may lack some amenities that can increase costs), and program limits on costs per unit. Projects supported by HOME and CDBG funds were estimated to be more costly to develop, although these differences were not generally statistically significant. The effect of HOPE VI financial support was estimated to be large and statistically significant, but only about 1 percent of projects in the sample were supported with this program. The projects that received financial support from this source might be idiosyncratic, or could include other unobserved characteristics that influence costs. For example, tenant relocation requirements for HOPE VI projects may have contributed to the higher per-unit costs. Projects in neighborhoods with low rents (relative to the state distribution) were estimated to be less costly, typically in the range of $20,000–$30,000 per unit. Costs in neighborhoods with higher rents were estimated to be modestly higher, but rarely significant. Older neighborhoods were associated with higher costs per unit, while newer neighborhoods were associated with lower costs per unit, as compared to projects in neighborhoods in which the median year built was between 1945 and 1994 (and controlling for other characteristics). In the pooled sample, estimated magnitudes were about $18,000 higher in older neighborhoods and about $17,000 lower in newer neighborhoods. Examining Construction and Soft Cost Components Table 6 shows that many of the same factors affected total costs, construction costs, and soft costs similarly. For instance, all costs scaled with project size and new construction, and many of the neighborhood effects remained significant. A higher share of 3-bedroom units was associated with higher costs in all cost categories. “Larger buildings” projects had higher total costs and construction costs, but the estimated effect on soft costs was modestly negative and insignificant.
The latter result is consistent with the idea that soft costs scale with the number of units, but not with the size or number of buildings in a project. Projects with Rural Development loans were associated with lower construction and soft costs. For construction costs, the result is consistent with the loans being used for projects characterized by lower-than-average construction costs. Soft costs may be affected more directly to the extent that Rural Development loans provide a key source of funding that may reduce the difficulty of other project financing efforts. The HOME indicator was associated with higher construction and soft costs, though the estimates were only modestly significant. Slightly more than one-third of projects across all allocating agencies received HOME funds. Finally, the lower costs associated with senior projects were more statistically significant for soft costs than for total costs or construction costs. Sensitivity to Specification In table 7, we present model variations that exclude, in turn, particular portions of the base case specification. Other remaining factors, including those associated with the LIHTC program, may be sensitive to the omitted factors. For instance, the estimated effect of a Rural Development loan may be sensitive to the presence of a rural control variable, or the estimated effect of a location in a qualified census tract may be sensitive to other indicators of neighborhood characteristics. Because the value of land influences the total cost of housing development, we first excluded the home value variable (a measure of variation in property values within and across allocating agency jurisdictions). Estimates of the effect of other neighborhood measures, such as housing stock age and rent quartiles, changed in the absence of the property value measure. The age of housing stock variables were highly significant with and without the inclusion of the property value measure. In the model with the property value measure included, the difference between the estimated cost in an older neighborhood and the estimated cost in a newer neighborhood is about $35,000. That is, the estimated cost in an older neighborhood was about $18,000 more and the estimated cost in a newer neighborhood was about $17,000 less than the estimated cost in a neighborhood in which the median year built was between 1945 and 1994. In the model with the property value measure excluded, this difference increased to about $50,000, which may reflect the underlying correlation of age of neighborhood and property value that we observe in our data set. For projects in locations in the upper half of the state contract rent distribution, the estimate became much larger and statistically significant at the 1 percent level. The coefficient on the neighborhood poverty rate measure became much smaller, decreasing from about 390 to about 125, and statistically insignificant. In the sample, the 25th percentile poverty rate was about 14 percent, and the 75th percentile value was about 37 percent. In the base case, an increase of 23 percentage points represented an increase in total costs per unit of about $9,000, but in the specification without the measure of property value the estimate was about $2,900 (controlling for other characteristics in both specifications). The overall fit, expressed as adjusted R-squared, was reduced from 0.648 to 0.618 in the absence of the property value measure.
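The poverty-rate arithmetic above follows directly from multiplying the estimated coefficient (dollars of per-unit cost per percentage point of neighborhood poverty) by the interquartile change of about 23 percentage points:

\[
390 \times 23 \approx \$8{,}970 \approx \$9{,}000 \quad \text{(base case, with the property value measure)}
\]
\[
125 \times 23 \approx \$2{,}875 \approx \$2{,}900 \quad \text{(property value measure omitted)}
\]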
Compared to the base case, most results were not particularly sensitive to the absence of the neighborhood variables (housing stock age, rent quartiles, and poverty rate). However, the qualified census tract variable became larger (from about $7,000 to about $18,000) and statistically significant in the absence of the neighborhood variables. The property value effect also became somewhat larger, suggesting that costs increased by about $41,000 per unit, compared to $33,000 in the base case, given a change in property value from the first to the third quartile and controlling for other characteristics. The overall fit worsened from 0.648 to 0.627. The omission of the other housing program support variables had very little effect, which is not that surprising given the lack of large effects other than the presence of Rural Development loans. The overall fit, expressed as adjusted R-squared, was reduced from 0.648 to 0.641. Examining Effects of the American Recovery and Reinvestment Act of 2009 Activities funded through nonrefundable tax credits require the entities claiming the credit to have (or expect to have) sufficient federal income tax liability to make the credit desirable. During the 2007–2009 recession, some investors in tax credit-related activities saw reductions in their tax liability. ARRA created the possibility that low-income housing projects could be supported by federal grants that allocating agencies would allocate in much the same manner as they allocated tax credits. Of all LIHTC projects receiving some ARRA support, more than 90 percent had final costs certified in 2011 and 2012. Thus, we examined the effects of ARRA, expressed as a binary indicator of participation, using the same model but with projects restricted to those that were certified in 2011 and 2012. That is, we believe this was the time period for which ARRA was likely to be most relevant and thus any effects likely to be most pronounced. About one-half of the projects in our data for project years 2011 and 2012 received some ARRA support. We present results for total costs, construction costs, and soft costs separately, the motivation being that grant funding may reduce the costs of project finance and syndication relative to the traditional credit-based context (see table 8). Construction costs might be expected to be less directly affected by a change in the project finance regime. In general, the overall results are similar to those presented in table 6. The ARRA indicator is negative and significant in the total and soft cost versions, and negative but insignificant in the construction cost context. ARRA support was estimated to reduce soft costs by a little more than $4,000 per unit, holding other factors constant. For context, the average soft cost per unit during this time period was about $53,000. Examining Effects of Variables Not Available for All Allocating Agencies In principle, nonprofit developers do not expect to earn a return on investment, so they may be able to develop projects at lower cost. Nonprofit and for-profit developers also may select different kinds of projects, so it is possible that nonprofit developers more often pick projects that are more costly in observable and unobservable characteristics. Table 9 provides the results of total cost models estimated using the relevant allocating agency subsamples. In both the parking structure and LEED models, we included categories for missing information.
The omitted category is the known absence of parking or LEED construction, respectively. Both of these subsamples were heavily weighted by California projects. The estimated effect of parking structures was quite large and statistically significant at the 1 percent level. Regardless of the true magnitude of the effect, projects that included parking structures clearly were likely to cost more. Not all projects envision tenants with cars; for those that do, a surface parking option often may be feasible, but when it is not, project costs will be larger. LEED certification was associated with costs of about $19,000 more per unit than other projects, holding other factors constant. LEED projects represent about 18 percent of projects in which LEED status was clearly known. Most LEED projects were new construction, and only about 5 percent of the rehabilitation projects with known LEED status were built to LEED standards. Nonprofit set-aside provisions were associated with an increase in total cost per unit of about $15,000, controlling for other characteristics. Nonprofit set-aside projects had different characteristics from those of projects developed without nonprofit set-asides. For instance, nonprofit set-aside projects typically were smaller, more likely to be in older neighborhoods, less likely to be in low-rent neighborhoods, and less likely to receive Rural Development loans—characteristics we estimated to be associated with increases in total cost per unit. When we estimated the model shown in table 9, but without the set-aside indicator, and multiplied the coefficients by mean values of the explanatory variables calculated separately for each group, we calculated that per-unit costs for projects developed without the set-aside were about $220,000 and for projects developed with the set-aside were about $250,000. As shown in table 9, the fact that we estimated an increase in total cost per unit even while controlling for other factors suggests that unobserved factors may be important. For instance, as mentioned in the body of this report, nonprofit organizations may focus more on populations that are more costly to serve, such as special-needs tenants who may require additional or enhanced facilities. Examining Effects of Donated Land or Property In these estimations, which excluded projects in which land and structure costs accounted for less than 1 percent of total development costs (see table 10), the fits improved, providing some evidence that the excluded observations introduced some noise to the estimation. In table 11, we examined the effect of aggregating certain projects in New York City. In principle, observations in a regression should be independent of one another. When individual building-level observations appear to be parts of larger projects under common development, this condition is violated. In New York City, it appears that separate tax credit allocations were made to single-building projects in close proximity to other tax credit projects awarded to the same developers at the same time or in consecutive years. For example, three buildings being renovated by the same developer in the same relatively small area could be considered as three separate one-building projects or one three-building project. Clustering the single-building projects as one project for the model made very little difference in the estimates, but led to modest improvements in the overall fit of the model and reduced the number of observations because of the aggregation of projects. We also examined the association between LIHTC costs and the proximity of projects to public transit.
Some allocating agencies offered incentives for the production of transit-oriented LIHTC developments—projects within 0.5 mile of a transit station. Research generally describes transit-oriented developments as compact, mixed-use, walkable neighborhoods located near transit facilities. These types of developments are intended to advance other policy goals, such as furthering opportunities for employment. We used the Department of Transportation’s Fixed-Guideway Transit Network database to identify the distance from each project to the nearest transit station (train and bus rapid transit). For this model specification, we restricted our estimates to projects within 2 miles of a transit station because not all transit agencies reported station locations to the Department of Transportation database—making our transit distance variable quite large for some projects. As shown in table 12, while we did not find that projects within 0.5 mile of a transit station had significantly different costs than those between 0.5 and 1 mile (the omitted category), we did find that per-unit construction costs were about $17,000 greater for transit-oriented developments, controlling for other characteristics. Finally, table 13 presents the mean values for our full project sample and base case model. Appendix III: Development Costs for LIHTC Projects Completed in 2011–2015, for 12 Allocating Agencies This appendix provides data on the development costs of Low-Income Housing Tax Credit (LIHTC) projects completed in 2011–2015 that received tax credits from 12 selected allocating agencies. Figure 14 shows how median per-unit costs for new construction and rehabilitation projects changed over that period for each allocating agency. Table 14 (new construction projects) and table 15 (rehabilitation projects) break down the median per-unit costs into hard and soft costs and their component parts. Tables 16 and 17 provide data on alternative cost measures—cost per bedroom and per square foot—although this information was not available for all 12 allocating agencies. All the cost data in this appendix are presented in 2015 dollars. For additional information on the cost categories we describe, see appendix I. Two Studies Identifying Associations between Project Characteristics and Per-Unit Cost Two of the five studies we reviewed used statistical models to identify the association between project characteristics and per-unit cost. California The authors of a 2014 study sponsored by several California agencies found that the median per-unit cost (excluding land costs) of 400 new construction projects approved for 4 percent or 9 percent LIHTCs in 2001–2011 was $276,000. Using a regression analysis to control for multiple characteristics, they found a variety of characteristics were associated with differences in per-unit costs. Similar to our results, the authors found that per-unit costs decreased as the number of units increased or as the unit size decreased. Projects with buildings that had four or more stories were also about 10 percent more expensive per unit. The authors found that higher land costs tended to indirectly increase construction costs, because developers responded by building taller and more often included structured parking—another cost driver.
Also similar to our results, they estimated that senior projects were less costly than projects targeted to families (by about 18 percent), and projects from nonprofit developers were more expensive than projects from for-profit developers (by about 9 percent). The authors of the California study also reviewed characteristics that we did not. For example, they found that projects with a higher degree of construction quality, durability, and energy efficiency had higher costs. Local factors, such as design review and approval requirements, also added to per-unit total cost. While data limitations prevented the authors from comparing the cost of LIHTC projects to market-rate developments in a conclusive way, they found that the per-unit construction costs of LIHTC projects in their sample were within the 50th and 75th percentile of estimated costs for market-rate projects with similar height, area, location, and wages. Washington The authors of a 2009 study sponsored by the Washington State Department of Commerce reviewed 65 affordable multifamily housing projects, including 41 LIHTC projects that received funding from the state’s Housing Trust Fund in 2003–2009. The average per-unit cost of new construction projects was about $177,000. Similar to our results, about 62 percent of the cost was attributed to construction. Using a regression analysis to control for multiple characteristics, the authors found that projects financed with LIHTCs tended to be larger and more expensive than affordable non-LIHTC projects. Architect fees were most strongly associated with per-unit costs, because architect fees may have approximated the complexity of the projects’ designs. Similar to our results, they found higher costs among urban projects relative to rural ones. In contrast to our results, the authors did not find that per-unit costs decreased as the number of units increased. Rather, for new construction LIHTC projects in urban areas, per-unit construction costs increased as the number of units increased. According to the authors, the cost increases may have been due to amenities associated with larger urban projects, such as structured parking. The authors also noted several characteristics that were not associated with per-unit costs, including the presence of a special needs population or the developer type. Three Studies Comparing Cost Differences The remaining three studies we reviewed compared cost differences among groups, typically by comparing averages between exclusive categories (for example, senior and nonsenior projects). But they did not statistically control for characteristics that may have differed among projects. Colorado The authors of a 2016 study sponsored by the Colorado Housing and Finance Authority analyzed 247 LIHTC projects that applied for 4 percent or 9 percent LIHTCs in Colorado in 2011–2016. They found the average per-unit cost of new construction projects increased by about 32 percent during this period to about $258,000 in 2016. The authors noted that the increase may have stemmed from the decreasing size of projects in Colorado and the increasing cost of construction. The authors studied the characteristics of the highest- and lowest-cost projects and stated that only two characteristics (project size and year of application) were consistently different between the groups. For projects that received 9 percent credits, characteristics such as location, developer type, and tenant types did not consistently differ between the highest- and lowest-cost projects. 
The authors also conducted 25 interviews with architects, consultants, developers, and general contractors, who stated that the most significant contributor to cost increases was higher labor costs due in part to shortages among skilled laborers and federal prevailing wage requirements. In addition, developers stated that while affordable housing developers were more focused on the long-term durability of their projects than market-rate developers, hard costs were generally similar between affordable and market-rate projects. However, soft costs tended to be higher as a result of legal fees associated with LIHTC syndication. New Mexico (and Other States) The authors of a 2014 study sponsored by the New Mexico Housing Mortgage Finance Agency reviewed cost drivers across 259 new construction projects that received 9 percent LIHTCs in 2006–2013 from multiple allocating agencies—Arizona, Colorado, Nevada, New Mexico, Texas, and Utah. The authors found the average per-unit cost (including reserves) ranged from about $124,000 in Texas to about $199,000 in Colorado. In New Mexico, average per-unit costs generally decreased in 2007–2010 and then increased thereafter through 2013. Similar to our results, the authors found that hard and soft costs comprised about 65 and 35 percent of project costs, respectively, among the states. Although the authors of the New Mexico study did not use a statistical analysis that would have controlled for multiple differences among project characteristics, the authors reported differences in construction costs among several groups. Similar to our results, the authors found slightly lower per-unit construction costs among senior projects compared to nonsenior projects, and that the largest projects (60 units or more) were generally less costly than the smallest projects (30 units or fewer). In contrast to our results, they noted higher per-unit construction costs among rural projects compared to urban projects. Also in contrast to our findings, the authors did not find a difference in the per-unit construction costs of nonprofit and for-profit developers. Minnesota In a 2013 study, a research intern working for the Minnesota Housing Finance Agency reviewed the costs of 412 affordable housing projects that applied for agency financing in 2003–2012, including 216 LIHTC projects, to determine the extent to which costs changed in response to cost containment strategies. The author found that the average per-unit cost of new construction LIHTC projects in the Minneapolis-St. Paul metropolitan area was about $237,000. Similar to our results and those of the other studies we reviewed, the author estimated that construction costs comprised about 61 percent of LIHTC project costs. Also similar to our findings, the author found that the per-unit cost of all affordable new construction projects generally increased during the sample period while the per-unit cost of rehabilitation projects generally decreased. For LIHTC projects specifically, the per-unit cost decreased by about 8 percent, compared to a decrease of about 18 percent among non-LIHTC affordable projects, in 2003–2012. The author noted that these decreases were important because they coincided with an increased focus by the housing agency on characteristics expected to increase costs, such as green building standards.
The author also noted that the housing agency previously found—in a separate study using its predictive cost model—that construction costs for the agency’s affordable housing projects were about 12 percent higher than estimates for similar market-rate projects in the same geographical area. Appendix VI: Cost-Management Approaches for Each Allocating Agency, as of 2017 This appendix provides information on cost-management approaches of allocating agencies, based on our review of qualified allocation plans (QAP) and related documents for 57 agencies as of 2017. The agencies were located in all 50 states, the District of Columbia, the 4 U.S. territories that received a Low-Income Housing Tax Credit (LIHTC) allocation in 2017 (Guam, the Northern Mariana Islands, Puerto Rico, and the U.S. Virgin Islands), and two suballocating agencies (Chicago and New York City). See table 29 for the name and location of each agency. We identified four main approaches that agencies used to manage project-development costs: cost limits, credit allocation limits, fee limits, and cost-based scoring criteria. Agencies implemented these approaches in various ways, as shown in table 30. In addition, the types and number of cost-management approaches employed by each agency varied, as shown in table 31. The quantity of approaches used by an agency is not necessarily indicative of the quality or effectiveness of an agency’s cost management, which we were unable to measure. The extent of each agency’s practices for each type of cost-management approach also varied, as shown in tables 32–35. Appendix VII: Comments from the Internal Revenue Service Appendix VIII: Comments from the National Council of State Housing Agencies Appendix IX: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Steve Westley (Assistant Director), Cory Marzullo (Analyst in Charge), Stephen Brown, Heather Chartier, Farrah Graham, Brandon Kruse, John McGrail, John Mingus, Marc Molino, Ed Nannenhorn, Daniel Newman, and Barbara Roesmann made key contributions to this report.
Why GAO Did This Study LIHTCs encourage private investment in low-income rental housing and have financed about 50,000 housing units annually since 2010. The LIHTC program is administered by IRS and credit allocating agencies (state or local housing finance agencies). The program has come under increased scrutiny following reports of high or fraudulent development costs for certain LIHTC projects. GAO was asked to review the cost-efficiency and effectiveness of the LIHTC program. This report examines (1) development costs for selected LIHTC projects and factors affecting costs, (2) allocating agencies' oversight of costs, and (3) factors limiting assessment of costs. GAO compiled and analyzed a database of costs and characteristics for 1,849 projects completed in 2011–2015 (the most recent data available when compiled) from 12 allocating agencies. The agencies span five regions and accounted for about half of the LIHTCs available for award in 2015. GAO also reviewed the most recent qualified allocation plans and related documents for 57 allocating agencies and reviewed federal requirements. What GAO Found GAO identified wide variation in development costs and several cost drivers for Low-Income Housing Tax Credit (LIHTC) projects completed in 2011–2015. Across 12 selected allocating agencies, median per-unit costs for new construction projects ranged from about $126,000 (Texas) to about $326,000 (California). Within individual allocating agencies, the variation in per-unit cost between the least and most expensive project ranged from as little as $104,000 per unit (Georgia) to as much as $606,000 per unit (California). After controlling for other characteristics, GAO estimates that larger projects (more than 100 units) cost about $85,000 less per unit than smaller projects (fewer than 37 units), consistent with economies of scale. Allocating agencies use measures such as cost and fee limits to oversee LIHTC development costs, but few agencies have requirements to help guard against misrepresentation of contractor costs (a known fraud risk). LIHTC program policies, while requiring high-level cost certifications from developers, do not directly address this risk because the certifications aggregate costs from multiple contractors. Some allocating agencies require detailed cost certifications from contractors, but many do not. Because the Internal Revenue Service (IRS) does not require such certifications for LIHTC projects, the vulnerability of the LIHTC program to this fraud risk is heightened. Weaknesses in data quality and federal oversight constrain assessment of LIHTC development costs and the efficiency and effectiveness of the program. GAO found inconsistencies in the types, definitions, and formats of cost-related variables the 12 selected agencies collected. Additionally, allocating agencies did not capture the full extent of a key indirect cost—a fee paid to syndicators acting as intermediaries between project developers and investors. IRS does not require allocating agencies to collect and report cost-related data that would facilitate programwide assessment of development costs. Further, Congress has not designated any federal entity to maintain and analyze LIHTC cost data. Even without a designated federal entity, opportunities exist to advance oversight of development costs. In particular, greater standardization of cost data would lay a foundation for allocating agencies to enhance evaluation of cost drivers and cost-management practices.
What GAO Recommends Congress should consider designating a federal agency to maintain and analyze LIHTC cost data. GAO also makes three recommendations to IRS to enhance collection and verification of cost data. IRS disagreed with the recommendations and said it lacked certain data collection authorities. GAO maintains the recommendations would strengthen program oversight and integrity and modified one of them to allow IRS greater flexibility in promoting data standards.
Background Health care practitioners prescribe opioid medications to treat pain and sometimes other health problems, such as severe coughing. Opioid medications are available in immediate or extended release formulations and in different forms, such as a pill, liquid, or a patch worn on the skin. Opioids slow down some processes of the body, such as breathing and heartbeat, by binding with certain receptors. Opioid Use Disorders and MAT Over time, the body becomes tolerant to opioids, which means that larger doses of opioid medications are needed to achieve the same effect. People may use opioids in a manner other than as prescribed—that is, they can be misused. Because opioids are highly addictive substances, they can pose serious risks when they are misused, and misuse can lead to addiction and death. Symptoms of an opioid use disorder include a strong desire for opioids, the inability to control or reduce use, and continued use despite interference with major obligations or social functioning. Another concern associated with prescribed opioids is the potential for diversion for illegal purposes, such as nonmedical use or financial gain. Research has shown that MAT—which combines behavioral therapy and the use of certain medications (methadone, buprenorphine, and naltrexone)—can be more effective in reducing opioid use and increasing retention (i.e., reducing dropouts) than abstinence-based treatment—that is, when patients are treated without medication. Three medications are currently approved by FDA for use in MAT for opioid use disorders—methadone, buprenorphine, and naltrexone. Methadone: Methadone is a full opioid agonist, meaning it binds to and activates opioid receptors to help prevent withdrawal symptoms and reduce drug cravings. It has a long history of use for the treatment of opioid dependence in adults. Methadone suppresses withdrawal symptoms during detoxification therapy, which involves stabilizing patients who are addicted to opioids by withdrawing them in a controlled manner. Methadone also controls the craving for opioids during maintenance therapy, which is ongoing therapy meant to prevent relapse and increase treatment retention. Methadone can be administered to patients as an oral solution or in tablet form. Buprenorphine: Buprenorphine is a partial opioid agonist, meaning it binds to opioid receptors and activates them, but not to the same degree as full opioid agonists. It reduces or eliminates opioid withdrawal symptoms, including drug cravings. It can be used for detoxification treatment and maintenance therapy. It is available for MAT for opioid use disorder in tablet form for sublingual (under the tongue) administration, in film form for sublingual or buccal (inside the cheek) administration, and as a subdermal (under the skin) implant. Naltrexone: Naltrexone is an opioid antagonist, meaning it binds to opioid receptors but does not activate them. It is used for relapse prevention following complete detoxification from opioids. Naltrexone prevents opioid drugs from binding to and activating opioid receptors, thus blocking the euphoria the user would normally feel. It also results in withdrawal symptoms if recent opioid use has occurred. It can be taken daily in an oral tablet form or as a once-monthly injection given in a doctor’s office. Authorized Settings for MAT Medications Two of the three medications used to treat opioid use disorders—methadone and buprenorphine—are drugs that carry a potential for misuse.
Under the Controlled Substances Act (CSA), treatment involving these medications can take place in certain authorized settings: as part of federally regulated OTPs or in other settings, such as a physician’s office, within certain restrictions. OTPs. OTPs provide MAT, including methadone and buprenorphine, for people diagnosed with an opioid use disorder. Methadone may generally only be administered or dispensed within an OTP, as prescriptions for methadone cannot be issued when used for opioid use disorder treatment. Buprenorphine may be administered or dispensed within an OTP, or may also be prescribed by a qualifying practitioner who has received a waiver from SAMHSA. Naltrexone is not a controlled substance and can be used in OTPs and other settings. Office-Based and Other Settings. Under a Drug Addiction Treatment Act of 2000 (DATA 2000) waiver, practitioners may prescribe buprenorphine to up to 30 patients in the first year of their waiver, 100 patients in the second year, and up to 275 patients in the third year. Practitioners at the 275-patient level must meet additional qualifications and requirements. Naltrexone does not have similar restrictions. HHS Uses Grant Programs and Other Efforts to Expand Access to MAT for Opioid Use Disorders HHS has implemented five key efforts from 2015 through August 2017 that focus on expanding access to MAT for opioid use disorders. Four of these are grant programs, including programs focused on health centers or primary care practices in rural areas. Targeted Capacity Expansion: Medication Assisted Treatment – Prescription Drug and Opioid Addiction (MAT-PDOA). This grant program is administered by SAMHSA and provides funding to states to increase their capacity to provide MAT and recovery support services to individuals with opioid use disorders. Grant recipients are expected to identify a minimum of two high-risk communities within the state and partner with local government or community- based organizations to address the MAT-related treatment needs in these communities. Among other things, recipients are to use outreach and other engagement activities to increase participation in and access to MAT for diverse populations at risk for opioid use disorders. In August 2015, SAMHSA awarded 3-year grants to 11 states, under which each of the states will receive up to $1 million in each grant year. In September 2016, SAMHSA awarded 11 additional 3-year grants to other states. Total funding is expected to be up to $66 million for all 22 grants. SAMHSA announced the availability of up to 5 additional 3-year grants for fiscal year 2017. Applications for these grants of up to $2 million per year were due in July 2017 and as of August 2017 they had not been awarded. Substance Abuse Service Expansion Supplement to Health Centers. This grant program is administered by HRSA and provides funds for existing health centers to improve and expand their delivery of substance abuse services, including services with a specific focus on MAT for opioid use disorders in underserved populations. Health centers that receive these grants are required to increase the number of patients with health center-funded access to MAT for opioid use or for other substance abuse disorders treatment by adding at least one full-time substance abuse provider and supporting new or enhanced existing substance abuse services. HRSA awarded 2-year grants in March 2016 to 271 health centers. According to HRSA documents, total funding could be up to $200 million for all grants over 2 years. 
HRSA announced the availability of another set of grants to health centers for fiscal year 2017. Applications for these grants were due in July 2017, and as of August 2017 they had not been awarded. Increasing Access to Medication-Assisted Treatment in Rural Primary Care Practices. This grant program is administered by AHRQ and funds demonstration research projects that aim to expand access to MAT for opioid use disorders in primary care practices in rural areas of the United States. Grant recipients are expected to recruit and engage primary care providers and their practices, provide training, and support physicians and their practices in initiating treatment. The program also identifies and tests strategies for overcoming the challenges associated with implementing MAT in primary care settings and creates training and other resources for implementing MAT. AHRQ awarded these 3-year grants of up to $1 million per year to four recipients—teams of state health departments, academic health centers, local community organizations, physicians, and others—with project start dates of September 30, 2016. According to AHRQ documents, total funding is expected to be up to $12 million for the four grants over 3 years. State Targeted Response to the Opioid Crisis Grants (Opioid STR). This grant program is administered by SAMHSA and provides funding to states and others to increase access to treatment services for opioid use disorders, including MAT; reduce unmet treatment needs; and reduce opioid overdose deaths. Grant recipients are expected to implement or expand access to evidence-based practices, particularly the use of MAT, and to report on the number of people who receive opioid use disorder treatment, the number of providers implementing MAT, and the number of providers trained to use MAT. SAMHSA awarded 2-year grants starting in May 2017 to 50 states, the District of Columbia, four U.S. territories, and the freely associated states of Micronesia and Palau. According to SAMHSA documents, total funding could be up to $970 million for all grants over 2 years. Figure 1 displays the implementation timeframes, the number of grants, and funding levels for the four HHS grant programs related to MAT. As the figure shows, some of these awards were made in fiscal year 2015, while others were made as recently as May 2017. As of August 2017, these efforts were ongoing. In addition to these four grant programs, HHS’s fifth key effort increases treatment capacity by expanding the waivers that practitioners may receive to prescribe buprenorphine. Specifically, SAMHSA issued a regulation that became effective August 8, 2016, increasing the number of patients that eligible practitioners can treat with buprenorphine outside of an OTP (e.g., in an office-based setting). Previously, qualified practitioners could request approval to treat up to 30 patients at a time, and after 1 year the limit could increase to 100 patients at a time upon SAMHSA approval. The new regulation expanded access to MAT by allowing eligible practitioners who have had waivers to prescribe buprenorphine to 100 patients for at least 1 year to request approval to treat up to 275 patients thereafter. Similarly, SAMHSA has implemented provisions of the Comprehensive Addiction and Recovery Act of 2016 (CARA) that expanded the types of practitioners who can receive a waiver to prescribe buprenorphine in an office-based setting to include qualifying nurse practitioners and physician assistants.
CARA generally requires that these nurse practitioners and physician assistants complete 24 hours of training to be eligible for a waiver. According to HHS documents, as of early 2017, nurse practitioners and physician assistants who have completed this training could request a waiver from SAMHSA to treat up to 30 patients at a time. In addition to its five key efforts focused specifically on expanding access to MAT for opioid use disorders, HHS has other efforts with broader focuses, such as treating multiple types of substance abuse. While these efforts are not specifically focused on expanding access to MAT for opioid use disorders, they may result in expanded access to MAT. For example, CMS has approved section 1115 Medicaid demonstration projects to allow states to undertake comprehensive reforms of their delivery of substance abuse services, including provisions to enhance the use of MAT for opioid use disorders. In July 2015, CMS issued a state Medicaid Director letter informing states that they may seek approval of section 1115 demonstrations to undertake comprehensive substance use service reforms. According to CMS, all participating states are using the demonstration authority to develop a full continuum of care for individuals with substance abuse disorders, including coverage of short-term residential treatment services not otherwise covered by Medicaid. In addition, FDA has programs to help expedite development and to provide for faster review of marketing applications for certain drugs. According to FDA, it has conducted expedited reviews of Suboxone (buprenorphine and naloxone sublingual film), Vivitrol (extended release naltrexone injection) and Probuphine (buprenorphine subdermal implant). According to some federal officials and other stakeholders that we interviewed, as part of efforts to expand access to MAT for opioid use disorder, steps are being taken to prevent the possibility that the MAT medications could, in some cases, be diverted for illicit use, misuse, or for purposes not intended by a prescriber. For example, OTPs and practitioners who request and receive a waiver to prescribe buprenorphine to treat up to 275 patients outside of an OTP setting are required under federal regulations to maintain a diversion control plan. In addition, the MAT-PDOA grant program explicitly requires grant recipients to implement a diversion control plan, though the other grant programs do not have similar additional requirements. (See appendix I for an overview of the diversion control plan requirements for OTPs and the practitioners who prescribe buprenorphine outside of an OTP.) The 2016 Surgeon General’s report on Alcohol, Drugs, and Health noted that decades of research have shown that the benefits of MAT greatly outweigh the risks associated with diversion, and that withholding these medications greatly increases the risk of relapse to illicit opioid use and overdose death. HHS is Finalizing Its Approach for Evaluating MAT Expansion Efforts but Lacks Performance Measures with Targets and Implementation Timeframes HHS officials told us that as of August 2017, the department is in the process of finalizing its approach for evaluating the implementation of its agencies’ collective efforts to address the opioid epidemic that were undertaken as part of the HHS Opioid Initiative and will continue under the new administration’s Opioid Strategy. HHS officials provided a draft of the evaluation’s schedule. 
According to the officials, the evaluation will include, but not be limited to, efforts to expand access to MAT. In September 2016, HHS awarded a 2-year contract to Research Triangle Institute International (RTI) to evaluate HHS agencies’ collective efforts. HHS officials told us that they are still working with RTI to finalize the evaluation approach given new leadership priorities. Specifically, in April 2017, the new Secretary of HHS announced a revised strategy for addressing the opioid epidemic that will continue to address access to MAT for opioid use disorders but also include additional priority areas. According to HHS officials, to be responsive to the new priorities, the evaluation will focus initially on whether HHS’s efforts have been implemented as intended, and officials expect the evaluation to also provide information on any challenges HHS has faced in implementing these efforts. According to HHS officials, while the evaluation of MAT expansion efforts will use information from several sources, they have not yet determined exactly which information will be used or how it will be used. This information may include, for example, results from a separate, planned evaluation of one of the grant programs, Opioid STR, as well as other information HHS agencies collect as part of their ongoing monitoring efforts for each of their individual MAT grant programs. While the reporting requirements vary across the four MAT grant programs, the grantees provide HHS with information related to expanding access to MAT. Specifically, Targeted Capacity Expansion: Medication Assisted Treatment – Prescription Drug and Opioid Addiction (MAT-PDOA): Every 6 months, grant recipients are expected to submit progress reports to SAMHSA on the planned and actual number of patients treated, as well as information on other performance measures. Increasing Access to Medication-Assisted Treatment in Rural Primary Care Practices: Grant recipients are expected to submit quarterly progress reports to AHRQ with various information, such as information on the number of physicians who have been certified to prescribe buprenorphine and the number of primary care practices successfully initiating the delivery of MAT services as a result of the grant project. Substance Abuse Service Expansion Supplement to Health Centers: Health centers that received these grants were expected to submit quarterly progress reports to HRSA through the second quarter of 2017 on the number of physicians who have obtained a DATA 2000 waiver and the number of patients who received MAT from these physicians. Health centers must now report these data elements in their annual performance reporting along with information on the number of certified nurse practitioners and physician assistants who have received a DATA 2000 waiver. State Targeted Response to the Opioid Crisis Grants (Opioid STR): Every 6 months, grant recipients are expected to submit progress reports to SAMHSA on the number of individuals who receive opioid use disorder treatment, the number who receive opioid use disorder recovery services, and the number of providers implementing MAT, among other measures. While HHS’s evaluation will focus on whether HHS’s efforts have been implemented as intended, officials told us that in the future an evaluation may also focus on the effectiveness of these efforts, including the effectiveness of efforts to expand access to MAT. 
Doing so would be consistent with federal standards for internal control, which call for agencies to evaluate results. HHS has some of the information that could be used in a future evaluation of the effectiveness of its efforts to expand access to MAT. In particular, an HHS document describing the department’s fiscal year 2016–2017 goals identifies expanding MAT access as an important strategy for the success of HHS’s longer-term goal of reducing opioid use disorders and opioid overdoses. In addition, HHS has identified three potential ways to measure access to MAT: the number of prescriptions for MAT medications, the treatment capacity of practitioners who are authorized to prescribe buprenorphine for opioid use disorders through a DATA 2000 waiver, and the treatment capacity of OTPs certified to administer methadone and other medications. Moreover, HHS has data that could be useful for tracking progress in these areas (see table 1). However, HHS has not adopted specific performance measures with targets specifying the magnitude of the increases HHS hopes to achieve through its efforts to expand access to MAT, and by when. For example, HHS has not established a long-term target specifying the percentage increase in the number of prescriptions for buprenorphine HHS would like to achieve, which would help to show whether efforts by HHS and others are resulting in sufficient progress in increasing prescriptions for this MAT medication. HHS has also not chosen a specific method of measuring treatment capacity or established targets associated with it, which would help to show whether a sufficient number of providers are becoming available to evaluate and treat patients who may benefit from MAT. Without specifying these performance measures and associated targets, HHS will not have an effective means to determine whether its efforts are helping to expand access to MAT. The lack of such performance measures with associated targets is inconsistent with federal internal control standards that specify that management should define objectives and evaluate results. According to these standards, using performance information such as performance measures can help agencies monitor results and determine progress in meeting program goals. In the context of HHS’s efforts to expand access to MAT, establishing appropriate performance measures with associated targets would allow HHS to determine whether its efforts are making sufficient progress or whether they need to be improved. Gauging this progress is particularly important, given the large nationwide MAT treatment gap identified in 2015 between the total number of individuals who could benefit from MAT and the limited number who can access it based on provider availability. This gap was estimated at nearly 1 million people as of 2012, and according to HHS officials and other stakeholders, lack of providers continues to be a challenge. Until HHS establishes performance measures with associated targets for the factors related to access to MAT, the department will be unable to evaluate its progress expanding access to MAT for opioid use disorders. In addition, as of August 2017, HHS has not finalized its approach for the planned evaluation activities, including timeframes. ASPE officials said that timeframes for a finalized evaluation approach had not been established because they were still working with RTI to finalize the evaluation approach given the new leadership priorities.
When we spoke with the officials, they provided us with a draft evaluation schedule that covered the contract period ending September 2018. As of October 2017, HHS had not provided a finalized evaluation approach or schedule. Federal internal controls call for management to establish and operate monitoring activities and evaluate results. Without an implementation timeframe for the evaluation's activities, HHS increases the risk that its evaluation of its agencies' efforts will not be completed as expeditiously as possible, including an evaluation of HHS's efforts to expand access to MAT.

Selected Stakeholders Reported Using Outreach, Training, and Other Efforts to Help Expand Access to Medication-Assisted Treatment for Opioid Use Disorders

Officials from selected state health departments and behavioral health agencies, private health insurers, and national associations reported using several different efforts to help expand patients' access to MAT for opioid use disorders. All of the stakeholders we interviewed reported conducting outreach efforts to communicate information about the importance of MAT and how to access it, or providing training to educate providers on prescribing MAT medications.

Efforts by states. State health officials we spoke to described several planned or ongoing efforts to expand access to MAT, some of which are supported by federal funding, including federal grant programs. Officials from all five selected states told us that they are offering outreach to and training for providers to help expand access to MAT. For example, several state officials told us that they are promoting training to (1) encourage physicians to obtain authorization (DATA 2000 waivers) to prescribe buprenorphine and (2) encourage physicians with waivers to treat patients up to their patient limit or to request a higher patient limit. According to the stakeholders, all five selected states have implemented or are planning to implement a health care delivery model or approach that will expand access to MAT. Specifically, these models or approaches focus on integrating the use of MAT into primary care settings. For example, health officials from three states described use of a hub-and-spoke model. This model generally involves centralized intake and initial management of patients at a "hub" (e.g., an OTP) and then connecting these patients to community providers at "spokes" (e.g., primary care clinics) for ongoing care, with ongoing support provided by the hub as needed. Additionally, officials from two states described offering remote MAT-related consultations through telehealth that connects patients in rural areas with addiction specialists. According to a 2017 Healthcare Fraud Prevention and Partnership whitepaper, telehealth expands the reach of the addiction professional workforce and the existing pool of MAT providers, and it supports remote forms of behavioral therapy to make trained professionals more accessible to those in underserved or isolated communities. Officials from three states described focusing their MAT expansion efforts in various settings, such as criminal justice settings and emergency departments. State health officials from four of the five states told us that programs in their states are using peer specialists (individuals who have successfully recovered from substance use disorders) in emergency rooms and other settings to engage with addicted patients and refer them to addiction specialists or behavioral health counselors.
Officials from the selected states said that some of these and other efforts are funded through federal sources, such as MAT expansion grants awarded by SAMHSA, or with state funds to the extent they are available.

Efforts by private health insurers. Officials from private health insurers reported that they are expanding access to MAT through outreach or training for providers and through the following three efforts:

Eliminating the need for prior authorization to prescribe MAT medications. Officials from three insurers reported removing prior authorization requirements for MAT medications, thereby allowing patients to access needed MAT medications more readily, rather than undergoing a waiting period for approval to receive the medications. Other private health insurers told us that they continue to require prior authorization, which they said is intended to protect patient safety and reduce drug misuse, and officials from one insurer told us that they will allow a patient to access a limited amount of MAT medications for a period of 24 to 72 hours while making a determination about the appropriate treatment services for the patient.

Modifying health benefit coverage. Officials from one private health insurance plan told us that the company is redesigning the benefit coverage for methadone and has removed member copays. This effort is intended to make MAT medications more affordable and allow members who are not able to use buprenorphine to have an alternative, such as methadone, that is not cost-prohibitive.

Incentivizing providers and health insurance plan members to use MAT. Officials from four private health insurance plans described plans to offer incentives to providers or patients to use MAT. For example, officials from three health plans stated that they are offering alternative payment models or paying higher rates to providers that offer MAT, and another private health insurer is offering incentives to its members who are enrolled in behavioral health programs that provide access to MAT.

Efforts by national associations. Officials we interviewed from the national associations—including the American Society of Addiction Medicine, the National Governors Association, and the Association of State and Territorial Health Officials—told us that they are helping to expand access to MAT through outreach and training for their members and by developing tools and resource guides. An official from one association told us that it shares federal grant announcements, including those that are focused on expanding access to MAT, with its members. Officials from another association said it provides training to providers on how to appropriately prescribe MAT medications. In addition, officials from one association told us that they developed an opioid-related road map that identifies examples of strategies—including MAT—that state policymakers can use in their ongoing efforts to address the opioid epidemic. Examples of strategies include reducing the stigma associated with MAT through educating the public and potential providers. Another strategy in the road map is changing payment policies to expand access to MAT services, such as ensuring that Medicaid and other state health programs adequately cover all MAT medications and behavioral interventions and encouraging or requiring commercial health plans to adopt similar policies.
Conclusions

HHS funds grant programs and has taken other steps to expand access to MAT, which has been shown to be effective in reducing the prevalence of opioid use disorders and, with them, the likelihood of drug overdoses. HHS's Opioid Initiative began in 2015, and the grants that support it are ongoing, so it is likely too early to determine how effective HHS's efforts have been in expanding access to MAT and in meeting HHS's other priorities related to addressing the opioid epidemic. According to HHS, access to MAT can be measured in terms of the number of prescriptions for MAT and by the treatment capacities of OTPs and practitioners who are authorized to prescribe buprenorphine. Our review suggests, however, that HHS may not be ready to perform this evaluation. While HHS told us that it may evaluate the effectiveness of its efforts in the future, the department has not established performance measures with targets that would specify the results that HHS hopes to achieve through its efforts, and by when. Furthermore, HHS has not established timeframes for the activities that will make up its planned evaluation of whether HHS's efforts have been implemented as intended. Without performance measures with targets and evaluation timeframes, HHS increases the risk that the evaluation will not be completed in a timely manner or that HHS will not know whether its MAT-related efforts are successful or whether new approaches are needed. The evaluation is particularly important, given the hundreds of millions of dollars HHS has invested in its MAT-related grant programs.

Recommendations for Executive Action

We are making the following two recommendations to HHS.

The Assistant Secretary for Planning and Evaluation should establish performance measures with targets related to expanding access to MAT for opioid use disorders. (Recommendation 1)

The Assistant Secretary for Planning and Evaluation should establish timeframes in its evaluation approach that specify when its evaluation of efforts to expand access to MAT will be implemented and completed. (Recommendation 2)

Agency Comments

We provided a draft of this report to HHS for review, and HHS provided written comments, which are reprinted in appendix II. HHS also provided technical comments, which we incorporated as appropriate. In its written comments, HHS concurred with both of our recommendations. Specifically, for our first recommendation to establish performance measures with targets related to expanding access to MAT for opioid use disorders, HHS stated that developing such measures is appropriate and that the department will continue to work to develop robust performance measures, including measures related to MAT, as part of its overall Opioid Strategy, which includes the department's most recent efforts to address the opioid epidemic. For our second recommendation to establish timeframes in its evaluation approach that specify when its evaluation of efforts to expand access to MAT will be implemented and completed, HHS agreed that timeframes are important to any evaluation. HHS noted that its evaluation is being conducted under a 2-year contract that is scheduled to end in September 2018. HHS has also provided us with a draft evaluation schedule. We clarified in our report, however, that HHS has not yet provided a finalized approach for the planned evaluation or a finalized schedule establishing timeframes for the activities that will make up the evaluation.
Until it finalizes its evaluation approach and establishes related timeframes, HHS increases the risk that it will not complete its planned evaluation by September 2018. We are sending copies of this report to HHS and the appropriate congressional committees. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-7114 or curdae@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix III.

Appendix I: Diversion Plan Requirements for Opioid Treatment Programs and Practitioners Who Prescribe Buprenorphine

According to the Department of Health and Human Services, a diversion control plan is a set of documented procedures intended to reduce the possibility that controlled substances will be transferred or used illicitly. Opioid treatment programs (OTPs) and practitioners who prescribe buprenorphine at the highest patient level through a Drug Addiction Treatment Act of 2000 (DATA 2000) waiver are required to have these plans. OTPs are programs that may administer or dispense medication-assisted treatment (MAT) for people diagnosed with an opioid use disorder, including the use of methadone and buprenorphine. In addition, under a DATA 2000 waiver, practitioners may prescribe buprenorphine for patients, up to a 30-, 100-, or 275-patient limit.

Diversion Control Plan Requirement for OTPs

An OTP must maintain a current diversion control plan that contains specific measures to reduce the possibility of diversion of controlled substances from legitimate treatment use. Per federal guidelines, the goal of the diversion control plan is to reduce the scope and significance of diversion and its impact on communities. The guidelines state that each OTP's diversion control plan should make every effort to balance diversion control against the therapeutic needs of the individual patient. They also state that diversion control plans should address at least four general areas of concern: program environment, dosing and take-home medication, prevention of multiple program enrollment, and prescription medication misuse. The guidelines include details about each of these areas:

Program environment: Diversion in the program environment can be deterred and detected by regular surveillance and the monitoring of areas in and around the program, where opportunities for diversion may exist. A visible human presence at a program's location gives community members the opportunity to approach staff with concerns and communicates the program's commitment to assuring a safe environment and a positive impact on the surrounding community.

Dosing and take-home medication: In the area of dosing and take-home medication, diversion control encompasses careful control of inventory, attentive patient dosing, and close supervision of take-home medication. Observing a patient take his or her dose and having the patient drink and speak after dosing are fundamental components of diversion control. Take-home dosing should be provided with careful attention to regulatory compliance and the therapeutic benefit and safety these regulations are meant to promote.

Prevention of multiple program enrollment: Reasonable measures should be taken to prevent patients from enrolling in treatment provided by more than one clinic or individual practitioner.
An OTP, after obtaining patient consent, may contact other OTPs within a reasonable geographic distance (100 miles) to verify that a patient is not enrolled in another OTP.

Misuse of prescription medication: The misuse of prescription medication has become an area of great concern nationally and affects diversion control planning at OTPs. All OTP physicians and other healthcare providers, as permitted, should register to use their respective state's prescription drug monitoring program (PDMP) and query it for each newly admitted patient prior to initiating dosing. The PDMP should be checked periodically (for example, quarterly) through the course of each individual's treatment and, in particular, before ordering take-home doses as well as at other important clinical decision points.

Diversion Control and Plans for Practitioners Prescribing Buprenorphine Outside of an OTP

SAMHSA's best-practice guidelines for using buprenorphine for treating opioid use disorders include multiple references to diversion, including monitoring for diversion, storage of this medication to minimize diversion, and use of formulations that may be less likely to be diverted. Specifically, the best practices state that, when possible, practitioners should use the combination buprenorphine/naloxone product, which increases safety and decreases the likelihood of diversion and misuse. Further, physicians who request and receive a waiver to prescribe buprenorphine to treat up to 275 patients outside of an OTP are required to have a diversion control plan. According to an HHS official, as of July 13, 2017, roughly 3,330 of the over 39,000 practitioners with a waiver had a 275-patient limit waiver. The majority of these practitioners, just over 27,000, have a 30-patient limit. According to SAMHSA guidance, the diversion plan should contain specific measures to reduce the possibility of diversion of buprenorphine from legitimate treatment use and should assign specific responsibilities of the medical and administrative staff of the practice setting for carrying out these measures. Further, the guidance states that the plan should address how the environment at the practice setting can prevent onsite diversion; how to prevent diversion with regard to dosing and take-home medication; and how to prevent patients from receiving a prescription from more than one practitioner and later diverting some of the prescribed medication.

Appendix II: Comments from the Department of Health and Human Services

Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact

Elizabeth H. Curda, Director, (202) 512-7114 or curdae@gao.gov.

Staff Acknowledgments

In addition to the contact named above, Will Simerl, Assistant Director; Natalie Herzog, Analyst-in-Charge; La Sherri Bush; and Emily Wilson made key contributions to this report. Also contributing were Muriel Brown, Krister Friday, Sandra George, and Christina Ritchie.
Why GAO Did This Study

The misuse of prescription opioid pain relievers and illicit opioids, such as heroin, has contributed to increases in overdose deaths. According to the most recent Centers for Disease Control and Prevention data, in 2015 over 52,000 people died of drug overdoses, and about 63 percent of those deaths involved an opioid. For those who are addicted to or misuse opioids, MAT has been shown to be an effective treatment. GAO was asked to review HHS's and others' efforts related to MAT for opioid use disorders. This report (1) describes HHS's key efforts to expand access to MAT, (2) examines HHS's evaluation, if any, of its efforts to expand access to MAT, and (3) describes efforts by selected stakeholders (states, private health insurers, and national associations) to expand access to MAT. GAO gathered information from HHS officials as well as a non-generalizable selection of 15 stakeholders selected based on their MAT expansion activities, among other factors. GAO also assessed HHS's evaluation plans using internal control standards for defining objectives and evaluating results.

What GAO Found

In an effort to reduce the prevalence of opioid misuse and the fatalities associated with it, the Department of Health and Human Services (HHS) established a goal to expand access to medication-assisted treatment (MAT). MAT is an approach that combines behavioral therapy and the use of certain medications, such as methadone and buprenorphine. HHS has implemented five key efforts since 2015 that focus on expanding access to MAT for opioid use disorders—four grant programs that focus on expanding access to MAT in various settings (including rural primary care practices and health centers) and regulatory changes that expand treatment capacity by increasing patient limits for buprenorphine prescribers and allowing nurse practitioners and physician assistants to prescribe buprenorphine. Some of the grant awards were made in 2015, while others were made as recently as May 2017. (See figure.) As of August 2017, efforts under all the grant programs were ongoing. Grant recipients can use funding to undertake a range of activities, such as hiring and training providers and supporting treatments involving MAT. In addition, certain providers and grant recipients are required to develop plans for preventing MAT medications from being diverted for nonmedical purposes. HHS officials told GAO that as of August 2017, the department was in the process of finalizing its plans to evaluate its efforts to address the opioid epidemic. In September 2016, HHS awarded a contract to conduct the evaluation. HHS officials told GAO that they are still working with the contractor to finalize the evaluation approach and that it will focus on whether HHS's efforts to address the opioid epidemic have been implemented as intended. HHS officials said that in the future, HHS may also evaluate whether, or to what extent, its efforts have been effective in expanding access to MAT, in addition to evaluating implementation. While HHS has some of the information that could be used in a future evaluation of the effectiveness of its efforts to expand access to MAT, it has not adopted specific performance measures with targets specifying the magnitude of the increases HHS hopes to achieve through its efforts to expand access to MAT, and by when.
For example, HHS has not established a long-term target specifying the percentage increase in the number of prescriptions for buprenorphine HHS would like to achieve, which would help to show whether efforts by HHS and others are resulting in a sufficient number of prescriptions for MAT medications. HHS has also not chosen a specific method of measuring treatment capacity or established targets associated with it, which would help determine whether a sufficient number of providers are becoming available to evaluate and treat patients who may benefit from MAT. Without specifying these performance measures and associated targets, HHS will not have an effective means to determine whether its efforts are helping to expand access to MAT or whether new approaches are needed. Gauging this progress is particularly important given the large gap identified nationwide between the total number of individuals who could benefit from MAT and the limited number who can currently access it based on provider availability. GAO also found that as of August 2017, HHS had not finalized its approach for its planned evaluation activities, including timeframes. Without timeframes for the evaluation's activities, HHS increases the risk that the evaluation will not be completed as expeditiously as possible. In addition to HHS efforts to expand access to MAT, officials from selected states, private health insurers, and national associations reported using several efforts to expand patients' access to MAT for opioid use disorders. Several stakeholders provided GAO with the following examples of their efforts:

States. State health officials from all five selected states have implemented or are planning approaches that focus on integrating the use of MAT into primary care, such as by providing services for centralized intake and initial management of patients or through telehealth that connects patients in rural areas with addiction specialists in a different location.

Private health insurers. Three private health insurers reported removing prior authorization requirements for MAT medications so patients can avoid a waiting period before receiving the medications.

National associations. Officials told GAO that they are conducting outreach and training for their members and developing tools and resource guides. For example, one association developed a road map with strategies that state policymakers can use to address the opioid epidemic, including strategies for reducing the stigma associated with MAT through educating the public and potential providers.

What GAO Recommends

GAO recommends that HHS take two actions: (1) establish performance measures with targets related to expanding access to MAT, and (2) establish timeframes for its evaluation of its efforts to expand access to MAT. HHS concurred with both recommendations.
Background

RPS are long-lived sources of spacecraft electrical power and heating that are rugged, compact, highly reliable, and relatively insensitive to radiation and other effects of the space environment, according to NASA documentation. Such systems can provide spacecraft power for more than a decade and can do so billions of miles from the sun. Twenty-seven U.S. missions have used RPS over the past 5 decades. The current RPS design, the Multi-Mission Radioisotope Thermoelectric Generator (MMRTG), converts heat given off by Pu-238 into about 120 watts of electrical power at the beginning of its life—a 6 percent power conversion efficiency. One MMRTG contains 32 general purpose heat source (GPHS) fuel clads in the form of pressed Pu-238 pellets encapsulated in iridium. NASA's Planetary Science Division (PSD) science portfolio includes a wide array of missions that seek to address a variety of scientific objectives and answer many questions about the solar system, from how life began to how the solar system is evolving. Scientific and mission objectives influence the types of equipment needed for the mission, including the mission's power source. According to NASA officials we interviewed, missions in NASA's PSD portfolio are generally classified in three ways:

Flagship. Flagship missions are the largest and most expensive class of NASA's missions, costing $2 billion or more, and are given the highest priority for resources, including funding, infrastructure, and launch support. Past Flagship missions that have used RPS include the Galileo, Cassini, and Curiosity missions. NASA's Mars 2020 mission is a planned Flagship mission using RPS.

New Frontiers. New Frontiers missions focus on enhancing our understanding of the solar system and have a development cost cap of $850 million. To date, there has been one New Frontiers mission using RPS (New Horizons).

Discovery. Missions in the Discovery program have a development cost cap of $450 million to $500 million and have shorter development time frames, according to NASA officials and documentation. No Discovery mission has been powered by RPS.

DOE oversees the design, development, fabrication, testing, and delivery of RPS to meet NASA's overall systems requirements, specifications, and schedules. DOE's goal under its Supply Project is to reach a full Pu-238 production rate of 1.5 kg per year by 2023, at the earliest, with a late completion date of 2026. DOE also established an interim production rate of 300 to 500 grams per year by 2019, to ensure an adequate supply of Pu-238 for NASA's near-term missions, before the full production rate goal is achieved. The Supply Project involves a number of steps across several DOE national laboratories, including the use of two DOE research reactors—the High Flux Isotope Reactor at Oak Ridge National Laboratory (ORNL) and the Advanced Test Reactor at Idaho National Laboratory (INL). NASA began fully funding DOE's Supply Project in 2011, and since 2014, has been responsible for funding all aspects of RPS production operations, according to NASA documents. NASA funds DOE's efforts to build, test, and fuel RPS, as well as to update equipment and sustain staffing levels associated with RPS production between missions. Since 2014 NASA has provided, on average, approximately $50 million per year to support DOE's ongoing operations and maintenance of RPS production equipment. From its inception until early 2017, DOE used a short-term and incremental segmented management approach to manage the Supply Project.
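To put the background figures above in perspective, the following sketch works through the arithmetic they imply. The 120-watt output, 6 percent conversion efficiency, 32 fuel clads, and 1.5 kg-per-year production goal come from this section; the specific thermal power assumed for Pu-238 (about 0.56 watts per gram) is an outside, commonly cited value used here only for illustration, not a figure from NASA or DOE documentation.

# Power and fuel arithmetic implied by the MMRTG figures above (Python).
electric_w = 120.0    # electrical output at beginning of life
efficiency = 0.06     # stated power conversion efficiency
clads = 32            # GPHS fuel clads per MMRTG

thermal_w = electric_w / efficiency         # ~2,000 watts of Pu-238 heat
watts_per_clad = thermal_w / clads          # ~62.5 watts per fuel clad

pu238_w_per_g = 0.56                        # assumed specific power of Pu-238
pu238_g = thermal_w / pu238_w_per_g         # ~3,600 grams of Pu-238 per unit

supply_g_per_yr = 1500.0                    # DOE full-rate production goal
years_per_unit = pu238_g / supply_g_per_yr  # ~2.4 years of output per MMRTG

print(f"thermal power: {thermal_w:.0f} W ({watts_per_clad:.1f} W per clad)")
print(f"Pu-238 per MMRTG: ~{pu238_g / 1000:.1f} kg, "
      f"or ~{years_per_unit:.1f} years at full-rate production")

Under these assumptions, fueling a single MMRTG would consume roughly two and a half years of full-rate Pu-238 production, which is broadly consistent with the pace of two to three RPS missions per decade discussed in the next section.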
NASA Selects RPS for Missions Based Primarily on Scientific Objectives, and Several Factors May Affect NASA's Demand for RPS and Pu-238

NASA selects RPS to power its missions primarily based on scientific objectives and mission destinations. According to NASA officials we interviewed, the need for RPS is usually apparent based on the mission's scientific objectives and destination. For instance, an RPS is more likely to be needed for a mission to a distant planet, where minimal sunlight reduces the effectiveness of solar power. NASA officials we interviewed stated that, consistent with the National Space Policy, the agency uses RPS when they enable or significantly enhance a mission, or when alternative power sources, such as solar power, might significantly compromise mission objectives. NASA prioritizes mission selection based on missions identified in the National Academy of Sciences' decadal survey report, which represents the highest priorities of the scientific community and includes many missions that require the use of RPS. Prior to the establishment of DOE's Supply Project in fiscal year 2011, NASA officials we interviewed stated that mission selections were influenced by the limited amount of available Pu-238. These same officials told us that missions are now selected independently from decisions about how they will be powered. However, projected availability of Pu-238 is factored into whether an RPS is available for a specific mission opportunity. In addition to the scientific objectives of planned and potential space exploration missions, several other factors may affect NASA's demand for RPS and Pu-238:

Costs associated with missions that typically require RPS. According to NASA officials, RPS have typically been used on Flagship missions that cost $2 billion or more. NASA can support no more than one mission using RPS about every 4 years—or two to three missions per decade—based on expected agency funding levels.

Cost of RPS relative to mission costs. According to NASA officials, New Frontiers missions may be good candidates to use RPS; however, given the cost cap for this mission class, one RPS would account for about 9 percent of the mission's budget, while three RPS would account for almost 14 percent. For Discovery missions, the cost of using RPS would represent a large portion of the mission budget: a single RPS would represent more than 17 percent of a mission's development cap.

DOE's production capability. According to DOE officials we interviewed, it can take up to 6 years to acquire, fuel, test, and deliver a new RPS for a NASA mission. According to DOE and NASA officials we interviewed, given the current floor space dedicated to RPS development at INL and limits on staff exposure to radiation at Los Alamos National Laboratory (LANL), DOE only has the capacity to produce three to four RPS at one time. To accommodate DOE's current RPS production capability, NASA officials we interviewed said they will not select two consecutive missions requiring RPS.

Technological advances may reduce the demand for Pu-238 and thus RPS. For example, according to NASA officials, advances in solar power technology have realistically expanded the ability to use solar power for missions for which it would not have been considered before, and these advances could help address low levels of light intensity for deep space missions. NASA also is developing new RPS technologies that may reduce its demand for Pu-238 and thus RPS.
For example, NASA officials told us that they plan to invest in dynamic RPS technology that could increase RPS efficiency and require fewer RPS to achieve mission power. NASA research indicates that dynamic RPS designs could be more than four times as efficient as the current MMRTG design. The Supply Project goal of producing 1.5 kg of Pu-238 per year was established to support two to three PSD missions using RPS each decade, and NASA does not anticipate that other potential users will affect demand for RPS or Pu-238, according to NASA and DOE officials and documentation we reviewed. DOE planning documents and NASA officials we interviewed stated that current RPS and Pu-238 production levels expected from the Supply Project are intended to meet only PSD's demand. NASA officials said that they did not account for potential demand from other users within NASA, the national security community, or commercial sectors when establishing current Pu-238 production goals.

DOE Has Made Progress Meeting NASA's RPS and Pu-238 Demand, but Faces Challenges Reaching Full Production Goals

DOE has made progress meeting NASA's future demand for Pu-238 to fuel RPS. A chronology of key DOE planned RPS and Pu-238 production activities, and NASA's mission-related activities, is shown in figure 1. Since the project's inception in 2011, DOE has demonstrated a proof of concept for new Pu-238 production and has made approximately 100 grams of new Pu-238 isotope under its Supply Project. However, given DOE's Supply Project and RPS production schedule, and NASA's current space exploration plans to use up to four RPS for its Mars 2020 and New Frontiers #4 missions, DOE's existing Pu-238 supply will be exhausted by 2025. Moreover, DOE officials we interviewed from INL, LANL, and ORNL identified several challenges, including perfecting and scaling up chemical processing and the availability of reactors, that need to be overcome for DOE to meet its projected Supply Project goal of producing 1.5 kg per year of Pu-238 by 2026, at the latest. If these challenges are not overcome, DOE could experience delays in producing Pu-238 to fuel RPS for future NASA missions. DOE's ability to meet its production goal and support future NASA missions is at risk if certain steps for chemical processing necessary for the production of Pu-238 are not improved and scaled up. According to DOE officials we interviewed, DOE is still in the experimental stage and has not perfected certain chemical processing measures required to extract new Pu-238 isotope from irradiated targets, creating a bottleneck in the Supply Project and putting production goals at risk. In addition, reactor availability will be necessary for DOE to achieve its Pu-238 production goals. Officials we interviewed at INL and ORNL said that achieving 1.5 kg of Pu-238 per year is contingent on the availability of positions within both the High Flux Isotope Reactor (HFIR) and the Advanced Test Reactor (ATR) to irradiate neptunium targets for conversion to Pu-238 isotope. DOE officials said HFIR can produce approximately 600 grams of Pu-238 isotope and they plan to use positions within ATR to achieve full production goals; however, ATR has not been qualified for Supply Project work. In addition, DOE officials said that ATR's availability for the Supply Project may be limited due to competition from other users.
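The reactor-availability concern reduces to simple arithmetic. In the sketch below, the 1.5 kg-per-year goal and HFIR's approximately 600 grams are the figures cited above (reading the HFIR figure as an annual rate); the implied ATR share is the derived remainder, not a DOE planning number.

# Back-of-the-envelope split of the full production goal across reactors.
goal_g_per_yr = 1500.0   # full Pu-238 production goal
hfir_g_per_yr = 600.0    # approximate HFIR output cited by DOE officials

atr_g_per_yr = goal_g_per_yr - hfir_g_per_yr   # remainder ATR must supply
share = atr_g_per_yr / goal_g_per_yr

print(f"ATR would need to supply ~{atr_g_per_yr:.0f} g/year "
      f"({share:.0%} of the goal)")            # ~900 g/year, 60% of goal

On this reading, roughly 60 percent of the targeted output depends on a reactor that has not yet been qualified for Supply Project work.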
DOE officials said that they will be unable to meet full Pu-238 production goals if positions in ATR, which are already over-utilized, are not available for Pu-238 isotope production and that they do not have a plan to address this challenge. These and other challenges identified in our September 2017 report may place DOE’s RPS and Pu-238 production goals at risk, in part, because of the short-term and incremental segmented management approach DOE had used to manage the Supply Project since its inception in 2011 through early 2017. In March 2017, DOE officials we interviewed said that the agency anticipated moving to a constant GPHS production rate approach to help provide funding flexibility and stabilize RPS production staffing levels between NASA missions. In June 2017, DOE officials we interviewed said that implementing a constant GPHS production rate approach would also address other previously identified challenges associated with RPS production and the Supply Project and therefore decided to discontinue its short-term and incremental segmented management approach. However, DOE officials we interviewed did not describe how the new constant GPHS production rate approach would help them address some of the longer-term challenges previously identified by the agency, such as scaling up and perfecting chemical processing. We found that DOE has yet to develop an implementation plan for the new approach, with defined tasks and milestones, that can be used to show progress toward assessing challenges, demonstrate how risks are being addressed, or assist in making adjustments to its efforts when necessary. Our previous work has shown that without defined tasks and milestones, it is difficult for agencies to set priorities, use resources efficiently, and monitor progress toward achieving program objectives. In our September 2017 report, we recommended that DOE develop a plan that outlined interim steps and milestones that would allow the agency to monitor and assess the implementation of its new approach for managing Pu-238 and RPS production. DOE agreed with our recommendation and noted it was in the process of implementing an approach for the RPS supply chain that was more responsive to NASA’s needs, among other things. DOE also noted that it was developing an integrated program plan to implement and document the agency’s new approach and expected this to be completed in September 2018. We believe that the development of an integrated program plan is an important step and that any such plan should include defined tasks and milestones, so that DOE can demonstrate progress toward achieving its RPS supply chain goals. In addition, in our September 2017 report we identified another factor that could undermine DOE’s ability to inform NASA about previously identified challenges to reach its full Pu-238 production goal. We found that DOE does not maintain a comprehensive system for tracking RPS production risks and, instead, relies on individual laboratories to track and manage risks specific to their laboratories. Standards for Internal Control in the Federal Government call for agency management to identify, analyze, and respond to risks related to achieving defined objectives. We recommended that DOE develop a more comprehensive system to track systemic risks, beyond the specific technical risks identified by individual laboratories. Doing so would better position DOE to assess the long-term effects of the challenges associated with its Pu-238 and RPS production objectives. 
DOE agreed with our recommendation and stated that the agency would include steps to ensure that its risk assessment system would include comprehensive programmatic risks. Finally, in our September 2017 report we found that DOE's new approach to managing RPS and Pu-238 production does not allow DOE to adequately communicate long-term challenges to NASA. It is also unclear how DOE will use its new management approach to communicate to NASA challenges related to Pu-238 production. As a result, NASA may not have adequate information to plan for future missions using RPS. Standards for Internal Control in the Federal Government call for agency management to use quality information to achieve agency objectives and communicate quality information externally through reporting lines so that external parties can help achieve agency objectives and address related risks. In our September 2017 report, we recommended that DOE assess the long-term effects that known challenges may have on Pu-238 production quantities, time frames, and required funding, and communicate these potential effects to NASA. DOE stated that it agreed with our recommendation and would work with NASA to identify, assess, and develop plans to address known challenges. DOE also stated that the agency expected to complete this effort in September 2019. Chairman Babin, Ranking Member Bera, and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to respond to any questions that you may have at this time.

GAO Contact and Staff Acknowledgments

If you or your staff have any questions about this statement, please contact Shelby Oakley at (202) 512-3841 or OakleyS@gao.gov. In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to the report on which this testimony is based are Jonathan Gill (Assistant Director); Samuel Blake, Kevin Bray, John Delicath, Jennifer Echard, Cindy Gilbert, Timothy Guinane, John Hocker, Michael Kaeser, Jason Lee, Tim Persons, Danny Royer, Aaron Shiffrin, Kiki Theodoropoulos, Kristin VanWychen, and John Warren. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study

This testimony summarizes the information contained in GAO's September 2017 report, entitled Space Exploration: DOE Could Improve Planning and Communication Related to Plutonium-238 and Radioisotope Power Systems Production Challenges (GAO-17-673).

What GAO Found

The National Aeronautics and Space Administration (NASA) selects radioisotope power systems (RPS) for missions primarily based on the agency's scientific objectives and mission destinations. Prior to the establishment of the Department of Energy's (DOE) Supply Project in fiscal year 2011 to produce new plutonium-238 (Pu-238), NASA officials said that Pu-238 supply was a limiting factor in selecting RPS-powered missions. After the initiation of the Supply Project, however, NASA officials GAO interviewed said that missions are selected independently of decisions on how to power them. Once a mission is selected, NASA considers power sources early in its mission review process. Multiple factors could affect NASA's demand for RPS and Pu-238. For example, high costs associated with RPS and missions can affect the demand for RPS because, according to officials, NASA's budget can only support one RPS mission about every 4 years. Expected technological advances in RPS efficiency could reduce NASA's demand for RPS and Pu-238. DOE has made progress in reestablishing Pu-238 production to meet NASA's future demand to fuel RPS and has identified challenges to meeting its production goals. Specifically, since the start of the Supply Project, DOE has produced 100 grams of Pu-238 and expects to finalize production processes and produce interim quantities by 2019. However, DOE has also identified several challenges to meeting the Supply Project goal of producing 1.5 kilograms (kg) of new Pu-238 per year by 2026. DOE officials GAO interviewed said that DOE has not perfected the chemical processing required to extract new Pu-238 from irradiated targets to meet production goals. These officials also said that achieving the Pu-238 production goal is contingent on the use of two reactors, but only one reactor is currently qualified for Pu-238 production while the second reactor awaits scheduled maintenance. Moreover, while DOE has adopted a new approach for managing the Supply Project and RPS production—based on a constant production approach—the agency has not developed an implementation plan that identifies milestones and interim steps that can be used to demonstrate progress in meeting production goals and addressing previously identified challenges. GAO's prior work shows that plans that include milestones and interim steps help an agency to set priorities, use resources efficiently, and monitor progress in achieving agency goals. By developing a plan with milestones and interim steps for DOE's approach to managing Pu-238 and RPS production, DOE can show progress in implementing its approach and make adjustments when necessary. Lastly, DOE's new approach to managing the Supply Project does not improve its ability to assess the potential long-term effects of challenges DOE identified, such as chemical processing and reactor availability, or to communicate these effects to NASA. For example, DOE officials did not explain how the new approach would help assess the long-term effects of challenges, such as those related to chemical processing.
Under Standards for Internal Control in the Federal Government, agencies should use quality information to achieve objectives and to communicate externally, so that external parties can help achieve agency objectives. Without the ability to assess the long-term effects of known challenges and communicate those effects to NASA, DOE may be jeopardizing NASA's ability to use RPS as a power source for future missions.
Background

Postal retiree health benefits are provided as part of the Federal Employees Health Benefits Program (FEHBP). FEHBP covers federal employees and retirees, including postal and nonpostal retirees, who receive health insurance from companies that contract with the Office of Personnel Management (OPM). Retiree participation is voluntary; in fiscal year 2018, about 500,000 postal retirees are participating in FEHBP. Funding requirements for postal retiree health benefits are established by law, which divides responsibility among USPS, the federal government, and postal retirees. USPS is responsible for a specific percentage of premiums, the federal government is responsible for paying a prorated share, and retirees are responsible for the rest. The funding requirements for these benefits changed in 2006. Before then, a "pay-as-you-go" system governed USPS's payments, which required USPS to pay its share of premiums for current postal retirees. The 2006 Postal Accountability and Enhancement Act (PAEA) required USPS to start fully "prefunding" retiree health benefits. This meant that USPS was required to make annual prefunding payments to a newly established fund to build up funds to cover USPS's share of future postal retiree health benefit costs. PAEA also established the Retiree Health Benefits (RHB) Fund as a new fund in the U.S. Treasury for USPS to deposit money into, and specified that beginning in fiscal year 2017, the fund would be used by OPM to pay USPS's share of postal retiree premiums for health benefits. Under PAEA, the first 10 years of prefunding payments were fixed—ranging from $5.4 billion to $5.8 billion annually from fiscal years 2007 to 2016. From fiscal years 2007 through 2016, USPS was also required to continue "pay-as-you-go" payments for its share of premiums for current retirees. The permanent schedule for USPS payments to prefund postal retiree health benefits under PAEA started in fiscal year 2017. We have reported that USPS's financial condition continues to deteriorate and its outlook is bleak. We have separately issued reports and testimonies that examined USPS's financial condition, including its liabilities, and identified strategies and options for USPS and Congress to reduce postal costs, generate revenue, and restructure the funding of USPS's pension and retiree health benefits. Looking forward, we have reported that USPS is facing unsustainable financial challenges as First-Class Mail volume continues to decline. USPS has recently reported that its revenue generation options are constrained, including by the price cap on market-dominant mail, and that any cost-cutting opportunities within its control are "relatively limited and dwindling." USPS stated that the opportunity for further cost savings within its control will not come close to filling its financial gap. With respect to actions taken by companies and state governments, we have previously reported on the long-term trend for these organizations to eliminate or reduce retiree health benefits. Factors contributing to this decline include financial challenges for companies and states, current and expected retiree health benefit costs, and the legal ability to change retiree health benefit programs.

The Financial Outlook of the Postal Service Retiree Health Benefits Fund Is Poor

The RHB Fund is on an unsustainable path and is projected to be depleted in 12 years under the status quo. USPS has missed approximately $38 billion in payments to the fund since fiscal year 2010, and the fund's balance is declining.
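The depletion projections discussed in the remainder of this section follow straightforward fund mechanics: each year the fund earns interest, receives whatever USPS actually pays in, and pays out USPS's share of retiree premiums. A minimal sketch of that arithmetic follows; every parameter in it (starting balance, interest rate, premium outlays and their growth) is an illustrative assumption chosen only to echo the scenarios OPM modeled, not an OPM figure.

# Illustrative only: simple year-by-year projection of a retiree health
# benefits fund. All dollar amounts are in billions and are assumptions.
def depletion_year(balance, usps_payment, start_year=2018,
                   interest=0.03, outlay=3.7, outlay_growth=0.05):
    """Return the first fiscal year in which the fund balance hits zero."""
    year = start_year
    while year < 2100:
        # interest income, plus any USPS contribution, minus premium outlays
        balance = balance * (1 + interest) + usps_payment - outlay
        if balance <= 0:
            return year
        outlay *= 1 + outlay_growth   # premiums paid by the fund grow over time
        year += 1
    return year

for payment in (0.0, 1.0, 2.0):       # the $0, $1 billion, $2 billion scenarios
    print(f"USPS pays ${payment:.0f}B/yr -> "
          f"depleted around FY{depletion_year(49.8, payment)}")

With these toy inputs the sketch lands near the OPM scenario dates discussed below: depletion around fiscal year 2030 with no USPS payments, pushed out a few years under the $1 billion and $2 billion payment scenarios. It illustrates why modest contributions slow, but do not stop, the drawdown.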
Beginning in fiscal year 2017, OPM started drawing from the fund to pay USPS’s share of premiums for postal retirees’ health benefits. OPM’s payments in that year exceeded the fund’s income from interest, and OPM projects that, based on the status quo, future payments will continue to exceed the fund’s income from interest. As long as USPS continues to miss its annual payments—which were nearly $4.3 billion in fiscal year 2017 and are $4.5 billion in fiscal year 2018—the fund is on track to be depleted in fiscal year 2030 based on OPM projections requested by us (see fig. 1). We reported similar results in our December 2012 report on postal retiree health benefits. At our request, OPM conducted a sensitivity analysis in which alternative projections were made that assumed USPS made payments to the fund of $1 billion per year or $2 billion per year; these alternative projections extended the fund’s projected depletion date from fiscal year 2030 to fiscal years 2032 or 2035, respectively (see fig. 2). OPM estimates the number of postal retirees eligible for federal retiree health benefits will remain near the current level of 500,000 through fiscal year 2035. The outlook for the RHB Fund is poor as USPS has inadequate resources to cover its required payments to the RHB Fund and, in our view, based on past practices and USPS statements, appears unlikely to make partial payments. USPS has repeatedly testified that its required payments to the RHB Fund are “unaffordable” relative to its current financial situation and outlook. In this regard, USPS accumulated net losses of more than $65 billion in the last 11 years and has budgeted for a net loss of about $5 billion in fiscal year 2018. Further, USPS reached its statutory borrowing limit of $15 billion in 2012. Although USPS accumulated liquid assets (cash and cash equivalents) of about $10.5 billion at the end of fiscal year 2017, it did not make $6.9 billion in required payments for retiree health and pension benefits. According to USPS officials, USPS did not make these payments in order to preserve liquidity and cover operational costs. If the RHB Fund is depleted, PAEA requires USPS to fill the resulting financial gap by resuming “pay-as-you-go” payments for its share of retiree health premiums that are currently being paid by the fund. However, PAEA does not address how funding will be provided or whether benefits will be provided if the fund becomes depleted and USPS does not make payments to cover its share of premiums. OPM and USPS have identified the following issues should the fund be depleted: According to OPM: (1) The RHB Fund is the initial funding source for USPS’s share of postal retirees’ health insurance premiums as long as money remains in the fund. (2) If the fund is depleted, then USPS becomes the funding source responsible for paying USPS’s share of these premiums. (3) Regardless of whether funds are available to pay USPS’s share of premiums, postal retirees are statutorily entitled to remain enrolled in their FEHBP plans. (4) Therefore, if the fund is depleted and USPS does not pay its share of premiums, the providers of these FEHBP plans would be underpaid. According to USPS: (1) Current law does not appear to contemplate a situation in which USPS itself is unable to make payments to the RHB Fund after the fund is depleted. (2) The law does not condition postal retirees’ eligibility for health benefits upon the fund or the payment of government contributions by USPS and the federal government. 
(3) Therefore, USPS stated it is reasonable to expect that postal retirees would remain eligible for health coverage even if USPS is unable to make payments to the RHB Fund after it is depleted. Regarding who would pay for their health coverage at this point, USPS stated that ultimately, it would be up to Congress to legislate a resolution to the funding issue. As the above projections show, the RHB Fund could be depleted in as little as 12 years—and USPS may be unable to cover its share of retiree health insurance premiums should its financial condition remain precarious. Depletion of the fund could affect postal retirees—who have provided a vital service to the nation—as well as USPS, postal customers, and other stakeholders, including the federal government.

Many Companies and State Governments Have Cut Retiree Health Benefits to Control Costs

A Small and Decreasing Percentage of Companies Continue to Offer Retiree Health Benefits

Survey data we reviewed indicate that most companies do not offer retiree health benefits and that the number of companies providing such benefits is decreasing over time. For example, the percentage of all private and public organizations (e.g., state or local governments) with more than 200 employees that offer employee health benefits and that also offer retiree health benefits is estimated to have declined from 40 percent in 1999 to 25 percent in 2017, according to annual surveys conducted by the Henry J. Kaiser Family Foundation and the Health Research & Educational Trust (Kaiser/HRET). Focusing specifically on the results for private for-profit companies, the 2017 Kaiser/HRET survey estimated that only 11 percent of companies with at least 200 employees that offered health benefits to active employees also offered retiree health benefits in 2017, the smallest percentage since comparable data were measured in 2012. The 2017 Kaiser/HRET survey also estimated that the percentage of companies offering retiree health benefits was greater among companies with at least 5,000 employees (35 percent) than those with 1,000 to 4,999 employees (18 percent) and those with 200 to 999 employees (9 percent) (see fig. 3). Surveys sponsored by the Agency for Healthcare Research and Quality (AHRQ) have estimated similar trends for private sector establishments with at least 1,000 employees and with 100-999 employees. According to the AHRQ surveys, an estimated 25 percent of private sector establishments with at least 1,000 employees offered health insurance coverage to retirees age 65 and older in 2016, down from 41 percent in 2003. For retirees under 65, an estimated 32 percent offered such coverage in 2016, down from 42 percent in 2003 (see fig. 4).

Many Companies with Retiree Health Benefits Have Changed Eligibility or Benefit Structures

Based on reports we reviewed and experts we interviewed, many companies that have retained their retiree health benefits have done so by making changes to control costs, including tightening eligibility and restructuring benefits. Depending on the company, the changes have applied to new hires, current employees, or retirees. Specific changes have included the following:

Tightening eligibility: Some companies have made new employees and/or employees hired after a given date ineligible to receive retiree health benefits, while other companies have increased the minimum age and/or length of service requirements for eligibility, according to reports and experts we interviewed.
Restructuring benefits: Many companies have restructured retiree health benefits to reduce the level of the benefit, to shift costs to retirees, and to change how the benefits are provided. For example, some companies have shifted from an approach under which a company pays a percentage of premiums for a selected health benefit plan, to an approach under which a company pays a fixed dollar amount that employees may put toward health care costs. The 2017 Kaiser/HRET survey estimated that 30 percent of private and public organizations with 200 or more employees that offer retiree health benefits provide a fixed dollar amount that the retiree can use to purchase a retiree health plan they choose. Experts on retiree health benefits that we interviewed told us such companies often shift costs to retirees by maintaining defined contributions at the same level over time, even as overall health care costs increase.

State Governments Have Also Changed Eligibility or Benefit Structures

Based on multiple reports and experts, nearly all state governments continue to offer retiree health benefits to at least some state government retirees but generally have shifted some costs from the state to retirees and/or active employees in various ways. For example, in 2016, the Pew Charitable Trusts and the John D. and Catherine T. MacArthur Foundation reported on the following recent changes at the state level related to eligibility for retiree health benefits, benefit levels, and aspects of how the benefits coordinate with Medicare:

Tightening eligibility or limiting benefit levels: Most states varied eligibility for retiree health benefits based on factors such as age and years of service, and varied benefit levels based on factors such as date of hire, date of retirement, or vesting eligibility; some states varied benefit levels based on years of service. Between 2000 and 2015, more than a dozen states changed the minimum age or the number of state service years required for retirees to be eligible for health benefits. During that timeframe, at least 10 states adopted formulas for prorating benefits that required different premium-sharing amounts based on years of service, or altered existing prorating formulas, bringing the total to 31 states that used prorating in 2015. At least 5 states stopped making any contributions to health premiums for certain retirees.

Medicare coordination: Thirty-five states provided employer-sponsored Medicare Advantage or Medicare Part D plans, known as Employer Group Waiver Plans, to provide health or prescription drug benefit coverage for Medicare-eligible retirees since these options were authorized in 2003. According to the report, "These cost-saving programs provide states with financial subsidies from the federal Medicare program to provide Medicare plus wraparound benefits."

Various Policy Approaches to Address the Sustainability of Postal Retiree Health Benefits Could Have Wide-Ranging Effects

We identified eight potential policy approaches to address the financial sustainability of postal retiree health benefits, primarily based on a review of legislative proposals and pertinent literature on actions that were taken by private companies and state governments and are discussed above. These approaches fall into three categories: (1) approaches that shift costs to the federal government; (2) approaches that reduce benefits or increase costs to postal retirees and/or postal employees; and (3) approaches that change how the benefits are financed.
These eight approaches are not mutually exclusive, nor are they an exhaustive list of possible approaches. Each approach could include a range of specific options, and even if successfully implemented, no one approach would necessarily be sufficient by itself to make postal retiree health benefits financially sustainable. Although our discussion of the various policy approaches specifically addresses postal retiree health benefits, most approaches could address federal retiree health benefits more broadly, as both postal and non-postal federal employees participate in the same federal health benefits program. All approaches we identified have different potential effects and would require congressional action, because current law establishes certain requirements for postal retiree health benefit plans, including basic rules for benefits, enrollment, and participation, and how benefits are to be paid for. Because the RHB Fund has a large and growing financial gap, any approach that would have a significant financial impact could affect the federal government, postal retirees, postal employees, USPS, and customers to varying degrees. Some Approaches Would Shift Costs to the Federal Government Medicare integration: Various legislative proposals have been made to increase postal retirees' participation in Medicare—a shift that would decrease USPS's costs but increase Medicare's costs, according to analyses by the Congressional Budget Office (CBO). These proposals would establish a program within FEHBP for active postal employees and postal retirees. Under these bills, Medicare-eligible postal retirees enrolled in this program would generally also be required to be enrolled in Medicare Parts A, B, and D. According to CBO analyses, the bills would have resulted in USPS savings, in part because increased participation in Medicare would shift primary responsibility for covering certain health care services to Medicare for those who enroll. As we have previously reported, the primary policy decision for Congress to make is whether to increase postal retirees' use of Medicare. Supplemental federal appropriations: If the RHB Fund becomes depleted and USPS does not fill the financial gap, supplemental federal appropriations could be an alternative if Congress wants benefits to continue at the same level. As previously noted, OPM officials told us that regardless of whether funds are available to pay USPS's share of premiums, postal retirees are statutorily entitled to remain enrolled in their FEHBP plans. However, supplemental federal appropriations for postal retiree health benefits could increase the federal budget deficit. In addition, supplemental appropriations for postal retiree health benefits would be inconsistent with USPS functioning as a self-financing entity that covers its costs with revenue it generates. Some Approaches Would Reduce Benefits or Increase Costs to Postal Retirees and/or Employees Tighten eligibility or reduce or eliminate retiree health benefits: As some companies and state governments have done, eligibility restrictions could be tightened for postal retiree health benefits, or other actions could reduce or even eliminate the benefits—for example, making new hires ineligible to receive retiree health benefits. The effects would depend on the specific changes and on whether they applied to current retirees, current employees, or future hires.
Depending on the extent of the changes, this approach would reduce USPS's liability for postal retiree health benefits and thereby reduce its unfunded liability. Increase premium payments by postal retirees and/or postal employees: As some companies and state governments have done, premium payments for postal retiree health benefits by postal retirees and/or postal employees could be increased. For example, as others have reported, some companies and state governments have required retirees to pay 100 percent of the health insurance premium for their retiree health benefits. Similarly, a larger share of retiree health premiums could be borne by postal retirees, or postal employees could be required to pay for retiree health benefits before they retire. Such changes would require changes to current law, which allocates specific financial responsibility for payments among USPS, the federal government, and retirees participating in FEHBP; active postal employees make no payment for retiree health benefits under current law. These approaches would shift costs to postal retirees, postal employees, or both, and could thereby decrease the expenses of the RHB Fund. Depending on how much of the costs were shifted, the additional costs could increase the challenge for retirees to ensure their accumulated resources last throughout retirement, or for postal employees to save for retirement. Further, as we have reported, rising health care costs can increase the overall amount individuals may need to save to ensure they have an adequate income once they retire. Change the federal contribution to a fixed subsidy: As some companies and state governments have done, postal retiree health benefits could be shifted to a structure in which a fixed amount subsidizes the benefit. This amount could be adjusted over time; any adjustments might or might not keep up with costs. Depending on the initial size of the fixed subsidy and any adjustments over time, this approach could reduce the expenses of the RHB Fund and USPS's required payments. RHB Fund expenses could be reduced over time if the fixed subsidy increases less than postal retiree health premiums. This approach would require changes to current law and regulations that prescribe the federal government's financial contribution to FEHBP. For example, CBO recently identified one option to change FEHBP's statutory structure from the premium-sharing structure that is required by law to fixed subsidies for health benefits. Under this option, the fixed subsidies would grow at the rate of inflation rather than at the average rate of growth for FEHBP premiums; CBO stated this change would be expected to slow the growth of federal contributions to FEHBP. A fixed subsidy for retiree health benefits could increase incentives for retirees to make less costly decisions with respect to health care. However, this approach could result in greater cost exposure for retirees, who may face difficult decisions regarding their health care, particularly if their financial resources are limited. As we have reported, individuals face the risk that rising and unpredictable health care or long-term care costs may lead them to draw down their retirement savings faster than expected. Establish a non-federal voluntary employees' beneficiary association (VEBA) for postal retiree health benefits: As some companies have done to provide retiree health benefits separately from the employer, a VEBA outside the federal government could be established to manage postal retiree health benefits.
This approach means that postal retiree health benefits would be provided through the VEBA instead of through the OPM-administered FEHBP. The non-federal VEBA would administer the postal retiree health benefits program, including determining the specific benefits that would be provided and the level of contributions from the VEBA members—who could include retirees and employees—and investing its assets. Such an approach would require determining the VEBA's governance structure, funding sources, level of funding, type of investments, and associated market risks. One issue could be determining the source and level of initial funding for a new VEBA for postal retiree health benefits, such as whether initial funding would come from the RHB Fund, the Treasury, or both. Other issues could be what funds would be provided to the VEBA going forward, including the source(s) and level of funding, and what the benefit levels would be. If the entire RHB Fund were transferred into a VEBA, the current level of benefits would ultimately not be sustainable unless further funding were provided from one or more sources, such as USPS, retirees, active employees, or the federal government. Thus, trade-offs would involve what level of benefits would be provided, who would bear the costs, and what might happen if VEBA assets decline or become depleted. Some Approaches Would Change How Benefits Are Financed Reduce the required level of prefunding: Proposed legislation includes an 80 percent funding target for postal retiree health benefits instead of the 100 percent target established by current law. This would reduce USPS's required payments to the RHB Fund but could increase costs for future postal ratepayers and increase the risk that USPS may not be able to pay for these costs. As previously discussed in this report, state governments either do not prefund their retiree health benefits or generally have a low level of prefunding. We have expressed concern about a proposed 80 percent funding target for postal retiree health benefits, which would have the effect of carrying a permanent unfunded liability equal to roughly 20 percent of USPS's liability; based on USPS's total retiree health liability of roughly $112 billion at the end of fiscal year 2017, that would be about $22 billion. As we previously reported, an alternative could be to build in a schedule to achieve 100 percent funding in a later time period after the 80 percent level is achieved. Although an 80 percent funding target would reduce USPS's required payments, fully funded benefits protect against an inability to make payments later, make promised benefits less vulnerable to cuts, and protect USPS's long-term viability. Further, reducing the funding target is unlikely to have any effect as long as USPS continues to make no payments to the RHB Fund, as discussed earlier. We continue to believe that as long as USPS is required by law to pay its share of retiree health benefits premiums, it is important for USPS to prefund its retiree health benefit liability to the maximum extent that its finances permit. We recognize that multiple options exist to prefund benefits and amortize unfunded liability and that no prefunding approach will be viable unless USPS can make the payments and maintain liquidity. As we have reported, making affordable prefunding payments would protect the viability of USPS by not saddling it with bills later on, when employees are already retired and no longer helping it to generate revenue; making payments can also make the promised benefits more secure.
We also have reported that deferring payments can pass costs from current to future postal ratepayers. To the extent prefunding is postponed by using a lower funding target, larger payments will be required later, when they likely would be supported by lower levels of profitable First-Class Mail volume. Outside investment: Proposed legislation would initially require 25 percent of the RHB Fund to be invested in index funds modeled after those used for federal Thrift Savings Plan investments. The objective of investing RHB Fund assets outside of U.S. Treasury securities would be to seek a greater rate of return on these assets in an attempt to reduce unfunded liabilities and the amount of required prefunding payments. Such outside investment would require legislation because current law limits RHB Fund assets to U.S. Treasury securities that are backed by the full faith and credit of the federal government. A higher rate of return on RHB Fund assets could reduce long-term funding needs. However, there are other considerations. For example, we have reported that if fund assets were invested in non-Treasury securities, the fund might experience losses in a market downturn and would thus have reduced assets available for health care. Assuming there would be no explicit federal guarantee of the value of the invested assets, we stated that USPS is not well positioned to deal with a potentially significant decline in their value, given its significant operating losses and continuing decline in mail volume. We also reported that the impact of any asset losses could be magnified because a market downturn that negatively affects asset values could be associated with a more general economic downturn that negatively affects USPS mail volume and revenues. Conclusions About a half million postal retirees receive retiree health benefits. Postal retirees have provided a vital service to the nation, and resolving a key aspect of their future situation warrants congressional action. Failure to address the poor financial outlook of the RHB Fund could pose serious consequences for these retirees as well as USPS, postal customers, and other stakeholders, including the federal government. It is reasonable to believe that USPS will not be able to fill the financial gap once the RHB Fund is depleted—a situation that could occur in as little as 12 years under the status quo. There is no consensus on what actions should be taken to address this problem. However, we have identified multiple approaches, which could be used individually or in combination, that Congress could consider to help address the financial shortfall in this area. All of these approaches have different potential effects, and it is up to Congress to consider the merits of the approaches and determine the most appropriate action to take. It would be preferable to take action when careful consideration is possible, rather than wait until a lack of adequate funding could disrupt postal retiree health benefits. Matter for Congressional Consideration Congress should consider passing legislation to put postal retiree health benefits on a more sustainable financial footing. Agency Comments and Our Evaluation We provided a draft of this report to OPM and USPS for their review and comment. OPM provided technical comments, which we incorporated as appropriate. USPS provided a written response, which is reproduced in appendix II of this report.
In its written response, USPS stated that it concurred with our matter for congressional consideration that congressional action is necessary to achieve a financially sustainable Postal Service Retiree Health Benefits Fund (RHB Fund). However, USPS said our discussion of potential policy approaches for postal retiree health benefits would benefit from additional context and balance. USPS also put forth additional information on three of the potential policy approaches highlighted in our report. Our report presents a high-level overview of eight potential policy approaches. It was not designed to be a comprehensive catalog of possible options with an analysis of the various considerations relevant to each. With regard to the Medicare integration approach, USPS stated that increased Medicare participation by postal retirees is not limited to the “full Medicare integration option” represented in our report, and it identified variations of such an approach. USPS said readers would benefit from a fuller picture of Medicare integration practices, stating that among employers that continue to provide retiree health benefits, full Medicare integration is a uniform best practice. USPS cited a 2014 report that said Medicare integration is the most common arrangement for employer-provided retiree health benefits, adding that retiree health benefits for Medicare-eligible employees are assumed to be merely supplemental to Medicare as a matter of course. Our report discussed Medicare integration by state governments, but did not present recent data on the percentage of private companies that coordinate their retiree health benefits with Medicare because such data are not publicly available. Additionally, USPS said our report framed the issue of Medicare integration as “solely” a tradeoff between USPS and Medicare costs, while there are other factors to consider, such as the relative benefits to USPS compared to the overall cost for the Medicare program. As we noted in our report, the eight potential policy approaches were not designed to be mutually exclusive, nor an exhaustive list of possible approaches. Additionally, we recognize there are various factors related to this approach, but the primary policy decision is whether to increase postal retirees' use of Medicare, which would further increase Medicare costs. Second, USPS said it believed our statements about approaches for changing the level of prefunding for retiree health benefits below the 100 percent level were misplaced, citing “universally accepted practices” for other entities to “pay-as-you-go” (i.e., not prefund at all) or to prefund at much lower levels. We have reported on such funding levels in the past as well. However, a proposed 80 percent funding target for postal retiree health benefits would have the effect of carrying a permanent unfunded liability equal to roughly 20 percent of USPS's liability, which could be a significant amount. As we previously reported, an alternative could be to build in a schedule to achieve 100 percent funding in a later time period after the 80 percent level is achieved. As our report also explained, although an 80 percent funding target would reduce USPS's required payments, fully funded benefits protect against an inability to make payments later, make promised benefits less vulnerable to cuts, and protect USPS's long-term viability. Finally, USPS said that our statements about potential risks associated with investment of assets outside the U.S.
Treasury seem disproportionate, given USPS's view that diversification of assets set aside for retiree health benefits is “universally accepted” as a best practice. We recognize that a higher rate of return on RHB Fund assets could reduce long-term funding needs for the RHB Fund. However, there are considerations specific to USPS. For example, assuming there would be no explicit federal guarantee of the value of the invested assets, we stated that USPS is not well positioned to deal with a potentially significant decline in their value, given its significant operating losses and continuing decline in mail volume. We also noted that, as we have previously reported, the impact of any asset losses could be magnified because a market downturn that negatively affects asset values could be associated with a more general economic downturn that also negatively affects USPS mail volume and revenues. In summary, we believe our report presents a balanced description of a wide range of possible policy options; it does not endorse or recommend any particular option for Congress. As we concluded, all of these approaches have different potential effects, and the information we present, as well as the additional views presented by USPS, provides critical information for congressional decision-makers to assess as they consider the merits of the approaches and determine the most appropriate action to take. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Postmaster General, and the Director of the Office of Personnel Management. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-2834 or rectanusl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Appendix I: Postal Retiree Health Benefits Trend Data [Table: the RHB Fund's end-of-year net funded status (unfunded) and missed USPS payments to the fund, in billions of dollars, by fiscal year.] Note: Total payments due on Sept. 30, 2017, consisted of $955 million for the amortization of USPS's unfunded liability for postal retiree health benefits and $3.3 billion for the “normal costs” of retiree health benefits. The “normal cost” is the annual expected growth in the liability attributable to an additional year of employees' service. Appendix II: Comments from the U.S. Postal Service Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the individual named above, Derrick Collins (Assistant Director); Kenneth John (Analyst-in-Charge); Amy Abramowitz; Taiyshawna Battle; William Colwell; Swati Deo; John Dicken; Leia Dickerson; William Hadley; James Leonard; Emei Li; Thanh Lu; Sara Ann Moessbauer; Joshua Parr; Malika Rice; Matthew Rosenberg; Amy Rosewarne; Frank Todisco; and Crystal Wesco made key contributions to this report.
Why GAO Did This Study USPS is required to prefund its share of health benefits costs for its retirees. To do so, USPS is required to make payments into the RHB Fund, which is administered by OPM. However, USPS has not made any payments to the fund since fiscal year 2010. At the end of fiscal year 2017, USPS had missed $38.2 billion in payments, leaving the fund 44 percent funded. Pursuant to law, beginning in fiscal year 2017, OPM started drawing from the fund to cover USPS's share of postal retirees' health benefits premiums. GAO was asked to review issues related to the sustainability of the RHB Fund. This report examines (1) the financial outlook for the RHB Fund and (2) policy approaches for postal retiree health benefits, among other topics. GAO evaluated financial projections for the RHB Fund from OPM. GAO reviewed laws and regulations and identified policy approaches primarily by reviewing legislative proposals and literature on actions companies and state governments have taken to address retiree health benefits. These approaches are neither exhaustive nor mutually exclusive. GAO also interviewed experts in retiree health benefits and postal stakeholders, chosen on the basis of relevant publications and prior GAO work, and interviewed and obtained written responses from OPM and USPS officials. What GAO Found The financial outlook of the Postal Service Retiree Health Benefits Fund (RHB Fund) is poor. At the end of fiscal year 2017, the fund's assets declined to $49.8 billion and unfunded liabilities rose to $62.2 billion. Based on Office of Personnel Management (OPM) projections requested by GAO, the fund is on track to be depleted in fiscal year 2030 if the United States Postal Service (USPS) continues to make no payments into the fund. Annual payments of $1 billion or $2 billion into the fund would extend the projected depletion date by 2 to 5 years (see figure). USPS has said that its required payments to the fund are unaffordable relative to its current financial situation and outlook. For the past 11 years, USPS has incurred large operating losses that it expects will continue. Additionally, USPS has stated that its opportunities for revenue generation and cost-cutting are limited. USPS reported that it did not make required fund payments in 2017 in order to preserve liquidity and cover operational costs. If the fund becomes depleted, USPS would be required by law to make the payments necessary to cover its share of health benefits premiums for current postal retirees. Current law does not address what would happen if the fund becomes depleted and USPS does not make payments to cover those premiums. Depletion of the fund could affect postal retirees as well as USPS, customers, and other stakeholders, including the federal government. About 500,000 postal retirees receive health benefits, and OPM expects that number to remain about the same through 2035. GAO identified three categories of policy approaches for postal retiree health benefits, based on legislative proposals and pertinent literature. First, some approaches, such as generally requiring eligible postal retirees to participate in Medicare, would shift costs to the federal government. Second, some approaches would reduce benefits or increase costs to postal retirees and/or employees. Third, some approaches would change how benefits are financed (see table). All of these approaches have different potential effects and would require congressional action.
Thus, it is up to Congress to consider the merits of different approaches and determine the most appropriate action to take. It would be preferable to take action when careful consideration is possible, rather than wait until lack of adequate funding could disrupt postal retiree health benefits. What GAO Recommends Congress should consider passing legislation to put postal retiree health benefits on a more sustainable financial footing. USPS agreed that congressional action is needed and offered views on some policy approaches discussed in this report.
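To illustrate the arithmetic behind depletion projections such as the ones OPM prepared, the following is a minimal, hypothetical sketch in Python. It is not OPM's actual model: the outlay, growth, and interest figures are our illustrative assumptions, and only the $49.8 billion starting balance comes from this report.

```python
# Minimal, hypothetical sketch of a retiree health fund depletion projection.
# This is not OPM's model; all parameters other than the starting balance
# are illustrative assumptions.

def depletion_year(balance, outlay, outlay_growth, interest, payment,
                   start_year=2018, horizon=50):
    """Return the first fiscal year the fund cannot cover its outlay,
    or None if it survives the horizon."""
    for year in range(start_year, start_year + horizon):
        # Credit assumed interest, add the employer payment, pay premiums.
        balance = balance * (1 + interest) + payment - outlay
        if balance < 0:
            return year
        outlay *= 1 + outlay_growth  # premium outlays grow each year
    return None

# $49.8B in assets at the end of FY2017 (from this report); a $4.0B
# first-year outlay growing 4 percent a year and 3 percent interest
# are assumed purely for illustration.
for payment in (0.0, 1.0, 2.0):  # assumed annual USPS payments, in billions
    print(payment, depletion_year(49.8, 4.0, 0.04, 0.03, payment))
```

Under these assumed parameters the no-payment run happens to reach depletion around fiscal year 2030, echoing the pattern OPM projected, but the sketch is intended only to show how the assumed payment level shifts the depletion date, not to reproduce OPM's results.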
Background Personnel Security Clearances Personnel security clearances are required for access to certain national security information. National security information may be classified at one of three levels: confidential, secret, or top secret. The level of classification denotes the degree of protection required for information and the amount of damage that unauthorized disclosure could reasonably be expected to cause to national security. Specifically, unauthorized disclosure could reasonably be expected to cause (1) “damage,” in the case of confidential information; (2) “serious damage,” in the case of secret information; and (3) “exceptionally grave damage,” in the case of top secret information. As part of the security clearance process, individuals granted security clearances are investigated periodically—for as long as they remain in a position requiring access to classified information—to ensure their continued eligibility. As of October 1, 2015, the latest date for which data are available, approximately 4.2 million government and contractor employees, at nearly 80 executive branch agencies, were eligible to hold a security clearance. IRTPA, Executive Orders, and Recent Legislation IRTPA. As noted earlier, IRTPA initiated a reform effort that includes goals and requirements for improving the personnel security clearance process government-wide. For example, IRTPA established specific objectives for the timeliness of security clearance processing. It also required that all security clearance background investigations and determinations completed by an authorized investigative agency or authorized adjudicative agency be accepted by all agencies (known as reciprocity), subject to certain exceptions. Appendix II provides additional details on IRTPA as it relates to personnel security clearances. Relevant Executive Orders. The personnel security clearance process and reform efforts are governed by various executive orders. Key executive orders affecting personnel security clearance reform include Executive Orders 12968, 13467, 13741, and 13764, which, among other things, provide definitions, processes, responsibilities, and authorities related to eligibility for access to classified information, suitability and fitness for government employment, and security clearance reform. Aspects of the reform effort covered by the Executive Orders include the establishment of the PAC and NBIB, the transfer of IT responsibilities to DOD, the definition of continuous evaluation, and the addition and amendment of certain roles and responsibilities. Recent legislation. Section 951 of the National Defense Authorization Act for Fiscal Year 2017 requires, among other things, the Secretary of Defense to develop an implementation plan for the Defense Security Service to conduct background investigations for certain DOD personnel—presently conducted by OPM—after October 1, 2017. The Secretary of Defense was to submit the plan to the congressional defense committees by August 1, 2017. DOD provided the plan to the congressional defense committees on August 25, 2017. Section 951 also requires the Secretary of Defense and the Director of OPM to develop a plan by October 1, 2017, to transfer investigative personnel and contracted resources to DOD in proportion to the workload if the plan for the department to conduct background investigations were implemented. In November 2017, after the conclusion of our audit work, Congress passed a bill for the National Defense Authorization Act for Fiscal Year 2018. 
The bill includes a provision that, among other things, would authorize DOD to conduct its own background investigations and would require DOD to begin carrying out the implementation plan required by section 951 of the National Defense Authorization Act for Fiscal Year 2017 by October 1, 2020. It would also require the Secretary of Defense, in consultation with the Director of OPM, to provide for a phased transition. Governance Structure for Security Clearance Reform Effort To help guide the personnel security clearance reform effort, in June 2007, the Director of National Intelligence and the Under Secretary of Defense for Intelligence established the Joint Reform Team through a memorandum of agreement to execute joint reform efforts to achieve IRTPA timeliness objectives and improve the processes related to granting security clearances and determining suitability for government employment. The team consisted of cognizant entities within OMB, OPM, ODNI, and DOD. The team worked on improving the security clearance process governmentwide, including providing progress reports on the reform effort, recommendations for research priorities, and oversight of the development and implementation of an information technology strategy, among other things. In June 2008, Executive Order 13467 established the PAC as the government-wide governance structure responsible for driving the implementation of and overseeing security and suitability reform efforts. Its specific responsibilities include ensuring the enterprise-wide alignment of suitability, security, credentialing, and, as appropriate, fitness processes; working with agencies to implement continuous performance improvement programs, policies, and procedures; establishing annual goals and progress metrics; and preparing annual reports on results. In addition, the PAC is to develop and continuously reevaluate and revise outcome-based metrics that measure the quality, efficiency, and effectiveness of the vetting enterprise, among other things. As noted above, the Deputy Director for Management of OMB serves as the Chair of the PAC and has authority, direction, and control over its functions. In addition to the Deputy Director for Management of OMB, the PAC has three additional principal members: the Director of National Intelligence, the Director of OPM, and the Under Secretary of Defense for Intelligence. Director of National Intelligence: The Director of National Intelligence serves as the Security Executive Agent and is responsible for, among other things, developing and issuing uniform and consistent policies and procedures to ensure the effective, efficient, timely, and secure completion of investigations, polygraphs, and adjudications related to determinations of eligibility for access to classified information or eligibility to hold a sensitive position. In this role, the Director of National Intelligence is also to direct the oversight of such investigations, reinvestigations, and adjudications. Director of OPM: The Director of OPM serves as the Suitability and Credentialing Executive Agent and is responsible for, among other things, prescribing suitability standards and minimum standards of fitness for employment. Under Secretary of Defense for Intelligence: The Under Secretary of Defense for Intelligence became the fourth principal member of the PAC with the issuance of Executive Order 13741 in September 2016. 
Additionally, Executive Order 13467, as amended, assigns DOD responsibility for designing, developing, operating, defending, and continuously updating and modernizing, as necessary, IT systems that support all background investigation processes conducted by NBIB. In addition, in April 2014, the PAC established the Program Management Office to implement personnel security clearance reforms. This office includes subject-matter experts with knowledge of personnel security clearances and suitability determinations from OMB, ODNI, OPM, DOD, the Department of Homeland Security, the Department of Justice, the Department of the Treasury, and the Federal Bureau of Investigation. Prior to the establishment of the Program Management Office, the PAC was supported by the Joint Reform Team as well as various subcommittees that addressed specific tasks, such as investigator and adjudicator training and the development of performance measures. Key Efforts to Reform the Personnel Security Clearance Process Since 2014, there have been a number of key efforts to reform the personnel security clearance process. For example, following the September 2013 shooting at the Washington Navy Yard, the PAC conducted a 120-day interagency review to assess risks inherent in the federal government's security, suitability, and credentialing processes. The February 2014 report resulting from that review highlighted 37 recommendations to improve, among other things, the federal government's processes for granting security clearances. Some of the recommendations address longstanding issues of the reform effort—such as improving data sharing between local, state, and federal law enforcement—and others are consistent with previous GAO recommendations—such as reporting measures for the quality of background investigations. The status of the implementation of these recommendations is discussed later in this report. In addition, in March 2014, OMB established Insider Threat and Security Clearance Reform as a government-wide, cross-agency priority goal, in part to improve interagency coordination and implementation within the area of personnel security clearances. Through this goal, the PAC and executive-branch agencies are to work to improve oversight to ensure that investigations and adjudications meet government-wide quality standards. From the second quarter of fiscal year 2014 to the fourth quarter of fiscal year 2016, the PAC reported quarterly on, among other things, the status of key milestones and the timeliness of initial investigations and periodic reinvestigations for the executive branch as a whole. As part of the cross-agency priority goal, the PAC identified various sub goals on which to focus its work. The sub goals were originally based on recommendations from the 120-day review and, according to PAC Program Management Office officials, were later updated to reflect the PAC's strategic plans. The current sub goals are as follows: trusted workforce, modern vetting, secure and mission-capable IT, and continuous process improvement. Further, in 2015, in response to the OPM data breach and at the request of the President, the PAC conducted a second review—a 90-day review—of the government's suitability and security processes. In the January 2016 summary of the review, the administration identified four actions to create a more secure and effective federal background investigations infrastructure.
Specifically, it identified the need to: (1) establish NBIB as the new federal entity to strengthen how the government performed background investigations; (2) leverage IT expertise at DOD for processing background investigations and protecting against threats; (3) update governance authorities, roles, and responsibilities; and (4) drive continuous performance improvement to address evolving threats. The status of these actions is discussed later in this report. NBIB’s Use of Contract Investigators to Conduct Background Investigations NBIB maintains an in-house federal investigator workforce, but according to NBIB, as of July 2017, it relied on contract investigators to conduct about 60 percent of the background investigations it provides to customer agencies, such as DOD. In 2011, OPM awarded three indefinite delivery/indefinite quantity contracts to three contractors to conduct investigation fieldwork services—CACI Premier Technology, Inc., KeyPoint Government Solutions, Inc., and U.S. Investigations Services, LLC (USIS). According to NBIB, USIS was responsible for about 65 percent of the contractor workload. In September 2014, OPM decided not to exercise the option for the USIS contract for fiscal year 2015. Eleven months prior, in October 2013, the Department of Justice had announced that the government would intervene in a civil suit against USIS, filed by a former employee under the False Claims Act. The government alleged that the contractor had circumvented contractually required quality reviews of completed background investigations to increase the company’s revenues and profits. In August 2015, the Department of Justice announced that USIS and its parent company had agreed to a $30 million settlement in exchange for a release of liability under the False Claims Act; accordingly, the claims resolved by the settlement agreement were allegations only, and there was no determination of liability. In June 2015, OPM conducted a review of USIS cases and found that the investigations for which USIS did not conduct the quality review were generally less complex cases. In addition, these cases had a lower return rate from OPM reviewers. In September 2016, OPM awarded new indefinite delivery/indefinite quantity contracts for investigation fieldwork services to four companies— CACI Premier Technology, Inc., KeyPoint Government Solutions, Inc., CSRA LLC, and Securitas Critical Infrastructure Services, Inc. The 2-year base period for these contracts runs to the end of fiscal year 2018, and OPM may exercise three 1-year option periods for each contract, with the first beginning on October 1, 2018. Executive Branch Agencies Have Made Progress Reforming the Security Clearance Process, but Long-Standing Key Initiatives Remain Incomplete Executive branch agencies have made progress in reforming the personnel security clearance process by, for example, issuing guidance, such as Quality Assessment Standards to guide background investigations, updated strategic documents to sustain the momentum of the reform effort, and adjudicative guidelines to establish single, common adjudicative criteria for security clearances. However, agencies face challenges in implementing certain aspects of the 2012 Federal Investigative Standards, including full implementation of continuous evaluation, and the issuance of a reciprocity policy remains incomplete. 
In addition, while the executive branch has taken steps toward establishing performance measures for the quality of government-wide personnel security clearance investigations, there is no milestone for their completion. The PAC Has Made Progress Reforming the Personnel Security Clearance Process The PAC has made progress in reforming the personnel security clearance process, as demonstrated through actions taken in response to recommendations and milestones outlined in four key reform effort documents: (1) the February 2014 120-day review; (2) the 2015 90-day review; (3) the Insider Threat and Security Clearance Reform cross-agency priority goal quarterly progress updates; and (4) the PAC's strategic framework for fiscal years 2017 through 2021. 120-day review. According to PAC documentation, as of August 2017, the PAC had implemented 73 percent of the 120-day review recommendations. For example, in response to a recommendation from the review, ODNI and OPM jointly issued Quality Assessment Standards in January 2015, which establish federal guidelines for assessing the quality of national security and suitability investigations. The establishment of the standards is intended to facilitate the measurement and continued improvement of investigative quality across the executive branch. In response to another related recommendation, ODNI developed the Quality Assessment Reporting Tool (QART), through which agencies will report on the completeness of investigations. According to ODNI officials, the QART was initially deployed in October 2016, and full implementation is expected by the end of calendar year 2017. 90-day review. By January 2017, the PAC had taken steps to implement all of the actions identified in the January 2016 summary of the 90-day review. Specifically, Executive Order 13741, issued in September 2016, established NBIB, within OPM, to replace FIS as the primary executive branch service provider for background investigations. It also identified DOD as the entity responsible for designing, developing, operating, and securing IT systems that support NBIB's background investigations. Additionally, the Executive Order elevated the Under Secretary of Defense for Intelligence to a full principal member of the PAC and directed the PAC to review and update governance, authorities, roles, and responsibilities. Subsequently, Executive Order 13764, issued in January 2017, further clarified relevant authorities, roles, and responsibilities, among other things. Further, according to PAC Program Management Office officials, the PAC has taken steps to implement continuous process improvements, such as developing a research and innovation program through which it has undertaken a number of projects aimed at improving the personnel security clearance process. In addition, the PAC established a continuous performance improvement initiative to develop mechanisms to improve the quality and efficiency of the end-to-end security, suitability, and credentialing vetting processes. As of July 2017, the PAC had identified seven categories of performance measures for the end-to-end security, suitability, and credentialing processes—such as timeliness, volume, and cost-efficiency—which it planned to implement in a phased approach. Cross-agency priority goal.
From the second quarter of fiscal year 2014 through the fourth quarter of fiscal year 2016, the PAC reported quarterly on the status of key initiatives, among other things, as part of the Insider Threat and Security Clearance Reform cross-agency priority goal. For each initiative, the PAC reported the milestone due date, the milestone status—on track, complete, at risk, missed, or not started—and the responsible agencies. As of the PAC’s last publicly reported quarterly update, for the fourth quarter of fiscal year 2016, 8 of 33 initiatives were listed as complete. According to PAC Program Management Office officials, they have continued to track the status of these milestones internally, and almost half of the initiatives—16 of 33—were listed as complete as of the third quarter of fiscal year 2017. These initiatives include the establishment of a Federal Background Investigations Liaison Office within NBIB to oversee and resolve issues between federal, state, and local law enforcement entities when collecting criminal history record information for background investigations, and developing plans to implement improved investigator and adjudicator training. Strategic framework. The PAC has issued three documents that serve as its updated strategic framework for the next 5 years. In July 2016, it issued its Strategic Intent for Fiscal Years 2017 through 2021, which identifies the overall vision, goals, and 5-year business direction to achieve an entrusted workforce. In October 2016, it issued an updated PAC Enterprise IT Strategy, which provides the technical direction to provide mission-capable and secure security, suitability, and credentialing IT systems. According to PAC Program Management Office officials, the third document—the PAC Strategic Intent and Enterprise IT Strategy Implementation Plan (Implementation Plan)—was distributed to executive branch agencies in February 2017. The Implementation Plan documents the key initiatives, targets, and measures for achieving the strategic vision. In March 2009, the Joint Reform Team issued an Enterprise IT Strategy, but the PAC’s own February 2014 120-day review found that this strategy stopped short of actions needed to develop enterprise-wide IT capabilities to modernize, integrate, and automate agency capabilities and retire legacy systems. It further stated that absent a strategy for integrated IT capabilities, agencies created disparate tools designed only to meet their specific requirements and recommended the development and execution of an enterprise reform IT strategy to ensure interoperability and improved sharing of relevant information. We compared the PAC’s 2016 Enterprise IT Strategy against leading practices for comprehensive and effective IT strategies and found that it generally aligns with such practices. For example, it contains results-oriented goals and strategies for agencies to achieve desired results, and describes interdependencies within and across projects. In addition to these four key areas, PAC members noted additional progress in reforming the personnel security clearance process. Specifically, ODNI officials highlighted the development of seven Security Executive Agent Directives, five of which have been issued as of August 2017, related to the use of polygraphs and social media in the investigative process, among other things. For example, in December 2016, the Director of National Intelligence issued Security Executive Agent Directive 4, National Security Adjudicative Guidelines. 
Effective in June 2017, the directive is meant to establish the single, common adjudicative criteria for all covered individuals who require initial or continued eligibility for access to classified information or eligibility to hold a sensitive position. DOD officials stated that having standardized adjudicative criteria such as these guidelines constitutes an important step in helping to ensure reciprocity. Additionally, a senior PAC Program Management Office official noted that the PAC has designated eight executive branch-wide IT shared service capabilities, such as the electronic adjudication of certain background investigations and a new electronic questionnaire for national security positions. According to this official, the latter two shared services are expected to be rolled out in 2017, with the remaining six shared services being rolled out as they become available. Key Aspects of the 2012 Federal Investigative Standards and the Development of a Reciprocity Policy Remain Incomplete While the PAC has reformed many parts of the personnel security clearance process, implementing certain key aspects of the 2012 Federal Investigative Standards, including changing the frequency of periodic reinvestigations for certain clearance holders and establishing a continuous evaluation program, remain incomplete. In addition, the issuance of ODNI’s draft reciprocity policy has been delayed. 2012 Federal Investigative Standards. These standards outline criteria for conducting background investigations to determine eligibility for a security clearance and are intended to ensure cost-effective, timely, and efficient protection of national interests and to facilitate reciprocal recognition of the resulting investigations. In April 2015, we reported that executive branch agencies with responsibilities for security clearances and suitability determinations had twice approved updated Federal Investigative Standards to replace the 1997 Standards, but that progress in implementing the updated standards had been limited. Specifically, as part of the reform effort that began after the passage of IRTPA, the Director of National Intelligence and the Acting Director of OPM, in their roles as Security and Suitability Executive Agents, signed new Federal Investigative Standards on December 13, 2008, and stated that the anticipated initial deployment of the standards was to begin in the third quarter of fiscal year 2009. However, the 2008 Federal Investigative Standards were not implemented, according to ODNI officials, because key terms were not clearly defined and required further clarification. In December 2012, the Director of National Intelligence and Director of OPM approved updated Federal Investigative Standards. Among other things, the 2012 Federal Investigative Standards identify five investigative tiers. According to OPM Federal Investigations Notice 16-02, tier 3 investigations are required for eligibility for access to secret and confidential information, or for noncritical sensitive positions, or “L” access. OPM Federal Investigations Notice 16-07 indicates that tier 5 investigations are required for eligibility for access to top secret or Sensitive Compartmented Information, or for critical sensitive or special sensitive positions, or “Q” access. The updated standards also changed the frequency of periodic reinvestigations for certain clearance holders. The Federal Investigative Standards milestone for full operating capability is the end of fiscal year 2017. 
Specific details on this topic were omitted because the information is sensitive. See figure 1 for a timeline of efforts made since 1997 to implement updated Federal Investigative Standards. The 2012 standards include continuous evaluation as a new requirement for certain clearance holders. This is a key executive branch initiative to more frequently identify and assess security-relevant information between periodic reinvestigations. Efforts to implement a continuous evaluation program were included in the implementation documents from the prior reform effort following approval of the 2008 Federal Investigative Standards, including an operational milestone for implementing a continuous evaluation program by the fourth quarter of fiscal year 2010. ODNI has adjusted the milestones for implementing the program and issuing a Security Executive Agent Directive for continuous evaluation several times. For example, in April 2015, we reported that ODNI planned to issue a continuous evaluation policy by September 2016 and to implement a continuous evaluation capability for certain clearance holders by December 2016. However, in November 2017 we found that while ODNI has taken an initial step to implement continuous evaluation in a phased approach across the executive branch, it has not yet issued a Security Executive Agent Directive for continuous evaluation or determined when the future phases of implementation will occur. According to ODNI officials, as of August 2017, this directive was undergoing interagency coordination and would be issued upon completion of that process. As of August 2017, continuous evaluation had not yet been fully implemented and ODNI had not set a new milestone for when it would occur. In November 2017, we recommended, among other things, that the Director of National Intelligence issue a continuous evaluation directive and develop an implementation plan. ODNI generally concurred with those recommendations. Figure 2 provides an overview of the adjusted executive branch milestones for issuing a continuous evaluation policy and implementing a continuous evaluation program, including developing a technical capability. Reciprocity policy. In 2004, IRTPA required that all security clearance background investigations and determinations completed by an authorized investigative agency or authorized adjudicative agency be accepted by all agencies, subject to certain exceptions. As reported in a cross-agency priority goal quarterly update in fiscal year 2016, the milestone for ODNI to issue and promulgate an updated national security reciprocity policy was September 2016. Security clearance reciprocity is statutorily required by IRTPA, subject to certain exceptions, and it is currently implemented by executive orders and guidance across executive-branch agencies. To consolidate existing reciprocity guidance, ODNI planned to issue a comprehensive, national-level security clearance reciprocity policy intended to resolve challenges associated with consistent, timely reciprocity processing across the executive branch. However, the issuance date has been postponed multiple times—the original milestone was September 2013—and as of July 2017, ODNI had not yet issued a reciprocity policy or identified a new milestone for its issuance. In July 2017, ODNI officials stated that a draft reciprocity policy was pending entry into the formal interagency coordination process and would be issued upon completion of that process. 
However, ODNI officials were unable to provide an estimated issuance date because, according to the officials, the length of the interagency coordination process can vary. PAC Program Management Office officials noted that issuance delays are due, in part, to the development of related personnel security policies, including continuous evaluation, with which the reciprocity policy must be aligned. Figure 3 shows milestones for the issuance of the reciprocity policy. In November 2010, we found that although executive-branch agency officials stated that reciprocity is regularly granted, agencies did not have complete records on the extent to which previously granted security clearance investigations and adjudications are honored government-wide. Further, we found that agencies lacked a standard metric for tracking reciprocity. We recommended that the Deputy Director for Management, OMB, acting as Chair of the PAC, develop comprehensive metrics to track when reciprocity is granted and report the findings from the expanded tracking to Congress. OMB concurred with our recommendation. However, in April 2015, we found that executive branch agencies still did not consistently track when reciprocity is or is not granted, nor did they have metrics in place to measure how often reciprocity occurs. ODNI officials stated that they planned to develop such metrics by 2016. Although the Director of National Intelligence had requested that Intelligence Community elements take steps to begin capturing reciprocity data in December 2014, the baseline data needed to support measures for reciprocity were not being collected government-wide. We recommended, in 2015, that the Director of National Intelligence require the development of baseline data to support measures for reciprocity. These data would help to identify and monitor changes in reciprocity government-wide. ODNI did not state whether it concurred with the recommendation, and as of November 2017, it had not been implemented. PAC officials stated that the greatest challenge of the reform effort is the breadth and complexity of the issues it is trying to resolve, noting that the reform effort involves nearly every executive branch agency. In addition, these officials stated that agencies sometimes focus on short-term, high-visibility issues instead of the longer-term efforts needed for systemic change. ODNI officials also noted the complexities of reforming the personnel security clearance process and working toward a whole-of-government solution. These officials noted that the reform efforts involve coordination among a number of agencies across the executive branch, which is both time- and resource-intensive. Both PAC Program Management Office and ODNI officials also identified limited agency resources and competing priorities—across executive branch agencies—as additional challenges. The PAC has taken recent steps to help address some of these challenges to continued progress, which could facilitate the completion of the key initiatives discussed above. For example, in its Implementation Plan, the PAC has identified approximately 50 initiatives on which it will focus its work over the next 5 fiscal years and has aligned those activities with its four strategic categories of initiatives—trusted workforce, modern vetting, secure and modern mission-capable IT, and continuous performance improvement.
However, according to ODNI officials, during their review of a draft of the Implementation Plan, they raised concerns about the number of initiatives and highlighted the need to provide greater prioritization of the initiatives to help better focus efforts. For example, some agencies are assigned as a primary owner of multiple initiatives. Specific details of the number of initiatives to which agencies are assigned were omitted because the information is sensitive. PAC Program Management Office officials stated that, to alleviate these concerns, they subsequently identified two to four priority initiatives within each of the four categories to help focus agency efforts. These officials further stated that the PAC intends to update and reissue a condensed version of its Implementation Plan annually so that it can make revisions as issues that affect these priorities, such as reduced budgets, occur. These 11 priority initiatives are identified in the PAC's Implementation Plan, which, according to PAC Program Management Office officials, the PAC finalized and circulated to executive branch agencies in February 2017. For example, establishing a continuous evaluation capability and strengthening and aligning guidelines for the reciprocal recognition of existing vetting decisions are listed among the PAC's priority initiatives. Given the limited agency resources cited by ODNI and PAC Program Management Office officials and other key competing efforts, such as improving investigation timeliness, the PAC's prioritization of initiatives could help refocus efforts on the most critical areas of the reform effort and could provide agencies with a manageable number of initiatives on which to prioritize their efforts. Executive Branch Has Taken Steps to Establish Government-wide Performance Measures for the Quality of Background Investigations, but It Is Unclear When This Effort Will Be Completed Our prior work on personnel security clearances has identified concerns about the quality of background investigations and has highlighted the need to build quality throughout the process for almost 20 years. Additionally, we found that executive branch reports on the personnel security clearance process contained limited information on quality in the process. In May 2009, we recommended, among other things, that the Deputy Director for Management of OMB, acting as Chair of the PAC, include quality metrics in an IRTPA-required report to Congress to provide more transparency on personnel security clearances. OMB concurred with that recommendation. However, the 2010 report to Congress did not include quality metrics, and the IRTPA reporting requirement expired in 2011. Appendix III provides an overview of our work in this area and of executive branch efforts to establish government-wide performance measures for the quality of background investigations. According to Executive Order 13467, the PAC is to establish annual goals and progress metrics related to security and suitability processes and continuous performance improvement. This focus on performance measures is consistent with our body of work on using results-oriented management tools to help achieve desired program outcomes—derived from our work on how to effectively implement the Government Performance and Results Act (GPRA) and the GPRA Modernization Act of 2010. This body of work provides agencies with a framework for effectively managing program performance to achieve desired outcomes, including establishing performance measures.
In addition, Standards for Internal Control in the Federal Government states that management should establish and review performance measures and monitor internal control systems. Further, we found in previous work that interim milestones can be used to show progress toward implementing efforts or to make adjustments when necessary. Developing and using specific milestones and timelines to guide and gauge progress toward achieving an agency's desired results informs management of the rate of progress toward achieving goals, and whether adjustments need to be made in order to maintain progress within given timeframes.

As of July 2017, the executive branch had taken two of three steps to establish government-wide measures for the quality of investigations. First, as previously discussed, ODNI and OPM issued Quality Assessment Standards for background investigations in January 2015 to establish standard criteria for agencies to consistently evaluate complete investigations. The standards were developed through an interagency effort chaired by ODNI, OPM, and DOD. These standards define complete investigations as those in which all required components were obtained in full and any known issues—such as criminal activity—were resolved per the standards. DOD officials highlighted issue resolution—having enough useful information about the circumstances surrounding a given issue to make an adjudicative determination—as a persistent challenge with background investigations for personnel security clearances, and as key to determining investigation quality.

Second, ODNI developed the QART, through which agencies will be able to report on the completeness of investigations, to include whether adjudicators considered issues identified during an investigation to have been sufficiently resolved. According to ODNI officials, they began to implement the QART in October 2016, and full implementation is expected by the end of calendar year 2017. ODNI officials stated that they are collecting sufficient data from the QART in order to develop measures for the quality of investigations. In ODNI's review of a draft of this report, officials stated that it is premature to set a milestone for completing government-wide performance measures for the quality of investigations and that ODNI will set such a milestone when the QART data have been fully analyzed. Specific details on this topic were omitted because the information is sensitive. Figure 4 provides an overview of the timeline for the executive branch's three-step process to develop measures for the quality of investigations.

Although ODNI has developed the QART, and ODNI and OPM have issued the Quality Assessment Standards, there are still challenges to resolve as measures for the quality of investigations are established. For example, DOD officials stated that they do not intend for all of their adjudicators to use the QART, and that they have not developed an interface between their Rapid Assessment of Incomplete Security Evaluations system and the QART. DOD officials also stated that they will continue to use their tool until the QART is automated for use in a new Defense Information System for Security. If DOD investigations—which represent the majority of the background investigations conducted by NBIB—are not captured by the QART, it is unclear how ODNI will have sufficient data to develop government-wide measures for the quality of investigations.
Further, NBIB officials noted that if their largest customer is not utilizing the QART, they are not positioned to receive comprehensive feedback. In April 2015, we recommended, among other things, that the Director of National Intelligence, in his capacity as Security Executive Agent, develop, implement, and report to Congress on government-wide, results-oriented performance measures for security clearance background investigation quality. ODNI did not state whether it concurred with that recommendation, and the recommendation has not been implemented. We continue to believe that measures for the quality of background investigations are needed to provide decision-makers, including OMB and Congress, with information on the quality of personnel security clearance background investigations, and to help ensure the quality of investigations. Without establishing a milestone for the completion of government-wide performance measures for the quality of investigations, their completion may be further delayed, and executive branch agencies will not have a schedule against which they can track progress or to which they are accountable.

Agencies Meeting Timeliness Objectives for Initial Clearances Decreased Since Fiscal Year 2012; a Government-wide Approach Has Not Been Developed to Improve Timeliness; and Reporting Has Been Limited

The Number of Executive Branch Agencies Meeting Established Timeliness Objectives for Investigations and Adjudications for Initial Secret and Top Secret Clearances Decreased from Fiscal Years 2012 through 2016

Executive branch agencies have experienced challenges in meeting timeliness objectives for investigation and adjudication of initial personnel security clearances, and their reporting on timeliness has been limited. The number of executive branch agencies meeting established timeliness objectives for both initial secret and initial top secret clearances decreased from fiscal year 2012 through fiscal year 2016. While ODNI has taken steps to address timeliness challenges, it has not developed a government-wide approach to help agencies improve the timeliness of initial personnel security clearances. In addition, the executive branch's reporting on timeliness has been limited, which inhibits both transparency and oversight of the personnel security clearance process.

Our analysis of timeliness data for specific executive branch agencies showed that the percent of agencies meeting established investigation and adjudication timeliness objectives for initial secret and top secret personnel security clearances decreased from fiscal year 2012 through 2016. Specifically, in fiscal year 2012, 27 percent of the agencies for which we obtained data met investigation and adjudication objectives for at least three of four quarters for initial secret clearances, and 59 percent met those objectives for initial top secret clearances. By fiscal year 2016, those figures had decreased to 2 percent and 10 percent, respectively. IRTPA established an objective for each authorized adjudicative agency to make a determination on at least 90 percent of all applications for a personnel security clearance within an average of 60 days after the date of receipt of the completed application by an authorized investigative agency—not longer than 40 days to complete the investigative phase, and 20 days to complete the adjudicative phase.
In assessing timeliness under these objectives, executive branch agencies exclude the slowest 10 percent and report on the average of the remaining 90 percent (referred to as the fastest 90 percent). In 2012, ODNI, in coordination with interagency participation, modified the timeliness goals for certain background investigations and established new timeliness goals.

As part of the Insider Threat and Security Clearance Reform cross-agency priority goal, from the second quarter of fiscal year 2014 until the fourth quarter of fiscal year 2016, the PAC reported quarterly on the average number of days to initiate, investigate, adjudicate, and complete the end-to-end process for initial secret and initial top secret cases for the executive branch as a whole. It reported this information as compared with the IRTPA-established timeliness objectives for initial secret clearances and ODNI's revised timeliness objectives for initial top secret clearances. For fiscal year 2016, the PAC reported that the government-wide average for executive branch agencies:

• Did not meet the 40-day investigation objective for the fastest 90 percent of initial secret clearances for any quarter. The averages ranged from 92 days to 135 days.

• Did not meet ODNI's revised investigation objective for the fastest 90 percent of initial top secret clearances for any quarter. The averages ranged from 168 days to 208 days.

With regard to the timeliness of investigations, our analysis of timeliness data reported by specific executive branch agencies showed that the percent of agencies that met timeliness objectives decreased from fiscal year 2012 through 2016. Specifically, our analysis showed:

• While 27 percent of the agencies met the 40-day IRTPA-established investigation objective for at least three of four quarters for the fastest 90 percent of initial secret cases in fiscal year 2012, only 2 percent met the objective for at least three of four quarters in fiscal year 2016.

• While 78 percent of the agencies met ODNI's revised investigation objective for at least three of four quarters for the fastest 90 percent of initial top secret cases in fiscal year 2012, only 12 percent met the objective for at least three of four quarters in fiscal year 2016.

• Across the agencies we reviewed, the average number of days to complete the investigation phase of the fastest 90 percent of initial top secret cases for the fourth quarter of fiscal year 2016 ranged from 26 days to 459 days.

Furthermore, our analysis showed that, for the executive branch agencies included in our review, the time required to investigate initial personnel security clearances increased from fiscal year 2012 through fiscal year 2016, often exceeding the investigation phase objective established by IRTPA. In addition, we found that both agencies with delegated authority to conduct their own investigations and those that used FIS (now NBIB) as their investigative service provider experienced challenges in meeting established investigation timeliness objectives. However, the only agencies that met investigation timeliness objectives for at least three of four quarters of fiscal year 2016—for the fastest 90 percent of initial secret and initial top secret clearances—have delegated authority to conduct their own investigations.
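The fastest 90 percent computation and the "three of four quarters" test can be illustrated with a short sketch. The sketch below is purely illustrative: the case durations are hypothetical and the functions are our own rendering of the metric as described above, not an agency system or an official algorithm.

```python
# Illustrative sketch of the "fastest 90 percent" timeliness metric:
# drop the slowest 10 percent of cases, average the rest, and compare
# the result with the applicable phase objective. All case durations
# below are hypothetical.

IRTPA_INVESTIGATION_DAYS = 40  # initial secret, investigative phase
IRTPA_ADJUDICATION_DAYS = 20   # initial secret, adjudicative phase

def fastest_90_average(case_durations_days):
    """Average duration of the fastest 90 percent of cases."""
    ordered = sorted(case_durations_days)
    cutoff = int(len(ordered) * 0.9)      # index bounding the fastest 90 percent
    fastest_90 = ordered[:cutoff] or ordered  # guard against very small samples
    return sum(fastest_90) / len(fastest_90)

def met_objective_three_of_four(quarterly_averages, objective_days):
    """True if the objective was met in at least three of four quarters."""
    return sum(avg <= objective_days for avg in quarterly_averages) >= 3

# Hypothetical quarterly investigation-phase durations for one agency.
quarters = [
    [22, 35, 38, 41, 44, 39, 37, 120, 33, 36],  # Q1
    [30, 42, 45, 39, 41, 38, 44, 150, 40, 37],  # Q2
    [28, 33, 36, 39, 35, 31, 38, 95, 34, 30],   # Q3
    [26, 31, 34, 37, 33, 29, 36, 88, 32, 28],   # Q4
]
averages = [fastest_90_average(q) for q in quarters]
print([round(a, 1) for a in averages])
print(met_objective_three_of_four(averages, IRTPA_INVESTIGATION_DAYS))
```

Note how the metric is insensitive to extreme outliers: each hypothetical quarter above contains one very slow case that is excluded before the average is taken, which is why an agency can meet the 40-day objective even when a small share of its cases take far longer.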
The executive branch’s challenges in meeting investigation timeliness objectives for initial personnel security clearances have contributed to a significant backlog of background investigations at the primary entity responsible for background investigations, NBIB. NBIB documentation shows that its backlog of pending investigations increased from about 190,000 in August 2014 to more than 709,000 investigations, as of September 2017. NBIB officials stated that more than 70 percent of the bureau’s pending background investigations had been pending for longer than the established timeliness objectives, as of June 2017. Additional details about NBIB’s investigation backlog and actions the bureau is taking to address it are discussed later in this report. With regard to the timeliness of adjudications, our analysis showed: While 51 percent of the agencies met the 20-day adjudication objective for at least three of four quarters for the fastest 90 percent of initial secret cases in fiscal year 2012, only 35 percent met the objective for at least three of four quarters in fiscal year 2016. While 65 percent of the agencies met the 20-day adjudication objective for at least three of four quarters for the fastest 90 percent of initial top secret cases in fiscal year 2012, only 43 percent met the objective for at least three of four quarters in fiscal year 2016. Across the executive branch agencies included in our review, the average number of days to adjudicate the fastest 90 percent of initial top secret cases for the fourth quarter of fiscal year 2016 ranged from 3 days to 175 days. Table 1 shows the percent of agencies meeting the investigation and adjudication objectives for the fastest 90 percent of initial secret and initial top secret cases for at least three of four quarters from fiscal years 2012 through 2016. In November 2017, we reported that the percent of executive branch agencies meeting established timeliness goals for completing periodic reinvestigations also decreased from fiscal years 2012 through 2016. Appendix IV provides information on executive branch agency periodic reinvestigations from fiscal years 2012 through 2016. ODNI Has Taken Steps to Address Timeliness Challenges but Has Not Developed a Government- wide Approach to Help Improve Timeliness ODNI has taken steps to address challenges in meeting established timeliness objectives, such as revising the timeliness objective for top secret investigations in 2012; however, it has not developed a government-wide approach to help agencies improve the timeliness of initial personnel security clearances. ODNI officials stated that several significant events contributed to agency challenges in meeting timeliness objectives over the past 5 fiscal years, including a government shutdown, the 2015 OPM data breach, a loss of OPM contractor support, and OPM’s review of the security of its IT systems, which resulted in the temporary suspension of the web-based platform used to complete and submit background investigation forms. In addition, executive branch agencies noted the increased investigative requirements stemming from the 2012 Federal Investigative Standards as a further challenge to meeting established timeliness objectives in the future. Standards for Internal Control in the Federal Government states that management evaluates and, if necessary, revises defined objectives so that they are consistent with requirements and expectations. 
In addition, the standards state that management should use quality information to achieve the entity's objectives, including relevant data from internal and external sources. As previously discussed, ODNI, in coordination with interagency participation, modified the timeliness goals for certain background investigations and established new timeliness goals. Since then, meeting timeliness objectives has become even more challenging due, for example, to updated investigation standards. However, since 2012, ODNI has not revisited the investigation or adjudication timeliness objectives for secret and top secret clearances. Specifically, ODNI has not conducted an evidence-based review, using relevant data, to ensure that these objectives are appropriate, given changes to the investigative requirements and other stated challenges. In addition, while ODNI and interagency partners modified certain timeliness goals in 2012, the number of executive branch agencies able to consistently meet the revised objectives also decreased over the past 5 fiscal years. Without conducting an evidence-based review of the investigation and adjudication timeliness objectives for both secret and top secret clearances to ensure that they are appropriate, agencies may experience further timeliness challenges and delays in determining eligibility.

According to ODNI officials, they are aware of each agency that does not meet timeliness objectives, and, in the capacity as Security Executive Agent, the Director of National Intelligence has taken steps to help these agencies improve their timeliness. Specifically, ODNI officials stated that the Director of National Intelligence issues annual agency performance letters to heads of agencies when security clearance timeliness objectives are not met. In the letters, the Director of National Intelligence requests that the agency submit an action plan, within 60 days of the date of the letter, identifying the factors that prevented the agency from meeting established timeliness objectives and the actions the agency will take to remedy those impediments. Officials stated that because each letter comes directly from the Director, it helps to attract the maximum possible attention.

In addition to establishing the current timeliness objectives for initial security clearances, IRTPA also established a 5-year timeframe and an interim milestone for the executive branch to implement those objectives. Specifically, the act required the development of a plan to reduce the length of the personnel security clearance process, including the IRTPA-established timeliness objectives described above. The plan was to be developed in consultation with appropriate committees of Congress and each authorized adjudicative agency, and to take effect 5 years after the date of enactment. Beginning no later than 2 years after the enactment of IRTPA and ending on the date the plan took effect, authorized adjudicative agencies were to make a determination on at least 80 percent of all applications within an average of 120 days after receipt by an authorized investigative agency—not longer than 90 days to investigate and 30 days to adjudicate. In November 2005, the executive branch submitted a plan to improve the timeliness of personnel security clearance processes government-wide. The Joint Reform Team submitted its first reform plan to the President on April 30, 2008, which proposed a new process for determining clearance eligibility.
Standards for Internal Control in the Federal Government establishes that management should define objectives clearly to enable the identification of risks and define risk tolerances. In our prior work on interagency collaboration, we found that overarching plans can help agencies overcome differences in missions, cultures, and ways of doing business, and help agencies better align their activities, processes, and resources to collaborate effectively to accomplish a commonly defined outcome. Additionally, to help sustain and enhance collaboration among federal agencies, we found that agencies that create a means to monitor, evaluate, and report the results of collaborative efforts can better identify areas for improvement. Further, we have found in previous work, including our prior work on personnel security clearances, that interim milestones can be used to show progress toward implementing efforts or to make adjustments when necessary. Developing and using specific milestones to guide and gauge progress toward achieving an agency's desired results informs management of the rate of progress toward achieving goals, and whether adjustments need to be made in order to maintain progress within given time frames.

While ODNI requests individual corrective action plans from agencies not meeting security clearance timeliness objectives, the executive branch has not developed a government-wide plan, with goals and interim milestones, to meet established timeliness objectives for initial security clearances that takes into consideration increased investigative requirements and other stated challenges. A coordinated approach, in addition to the ODNI-requested agency-specific plans, could help to improve timeliness, given that: (1) both agencies that use NBIB as their investigative service provider and those that have delegated authority to conduct their own investigations have experienced challenges in meeting established investigation and adjudication timeliness objectives over the past 5 fiscal years; and (2) timeliness challenges include government-wide challenges, such as the increased requirements stemming from the 2012 Federal Investigative Standards and past challenges in relation to OPM contractor support, as discussed above, and not just agency-specific challenges, such as staffing shortfalls. While the individual agency action plans represent a positive step toward helping to improve timeliness, agencies across the executive branch continue to experience timeliness challenges. A government-wide plan would better position ODNI to identify and address any systemic issues. Without a government-wide plan, including goals and interim milestones, for achieving timeliness objectives for initial secret and top secret investigations and adjudications—similar to the plan previously required by IRTPA—there could be continued delays in determining individuals' eligibility for access to classified information. Ultimately, such delays may leave agencies unable to fill critical positions that require a security clearance.

Current Timeliness Reporting Provides Limited Transparency and Oversight of the Reform Effort

Since 2011, the executive branch's reporting on the timeliness of personnel security clearances has provided limited transparency and oversight of the overall reform effort.
Specifically, IRTPA required the executive branch to submit an annual report, through 2011, to the appropriate congressional committees on the progress made toward meeting the act's requirements, including timeliness data and a discussion of any impediments to the smooth and timely functioning of its requirements. With respect to timeliness data, the act required that those reports include the periods of time required by the authorized investigative agencies and authorized adjudicative agencies for conducting investigations, adjudicating cases, and granting clearances, from date of submission to ultimate disposition and notification to the subject and the subject's employer. In response to this requirement, the executive branch provided a series of reports from 2006 through 2011 on the timeliness of executive branch agencies' initial investigations and periodic reinvestigations. For example, ODNI's IRTPA Title III Annual Report for 2010 specified the average number of days by quarter it took for selected individual agencies to initiate, investigate, adjudicate, and complete the end-to-end process for the fastest 90 percent of security clearances. The report also included average timeliness data for the executive branch as a whole.

However, since the IRTPA requirement ended in 2011, executive branch reporting has been limited. For example, as previously discussed, the PAC did not begin its quarterly reporting on the timeliness of executive branch agencies' personnel security clearances until the second quarter of fiscal year 2014, through the Insider Threat and Security Clearance Reform cross-agency priority goal. In addition, while these reports include the timeliness of both initial investigations and periodic reinvestigations, they provide the average timeliness of the executive branch as a whole and not the timeliness of individual executive branch agencies—as was provided under the prior IRTPA reporting—which makes it difficult to identify specific agencies that may be experiencing challenges.

Additionally, the Intelligence Authorization Act for Fiscal Year 2010 requires the President to submit an annual report on security clearance determinations to Congress. Among other things, the report is to include, for the preceding fiscal year, the number of federal and contractor employees who held a security clearance at each level and the number of employees who were approved for a security clearance at each level, as well as in-depth security clearance determination timeliness information for each element of the intelligence community. However, the annual reports that ODNI provides to the congressional intelligence committees in response to this requirement include only limited data as compared with the reports that were completed in response to IRTPA. Specifically, the Intelligence Authorization Act for Fiscal Year 2010 requires information only on the total amount of time for the longest and shortest determinations and the age of pending investigations, not on average timeliness. The reports are also limited in that they capture data for only a portion of the intelligence community. Specifically, ODNI's 2015 Annual Report on Security Clearance Determinations states that the report includes information for 7 of 15 elements of the intelligence community and that the other 8 elements reported that collecting the information would be a manual, resource-intensive process that was not viable due primarily to technology restrictions.
Standards for Internal Control in the Federal Government states that management should externally communicate the necessary quality information to achieve the entity's objectives through reporting lines so that external parties can help the entity achieve its objectives and address related risks. In addition, our high-risk criteria for monitoring and demonstrated progress call for agencies to report on program progress and related risks as well as show that issues are being effectively managed. However, since the IRTPA annual reporting requirement ended in 2011, the executive branch has provided limited reporting on the timeliness of individual agencies' initial investigations or periodic reinvestigations for personnel security clearances. In addition, while the PAC had regularly reported publicly on timeliness for the executive branch as a whole on a quarterly basis, it has not provided a public quarterly status update since the fourth quarter of fiscal year 2016. According to performance.gov, the website through which the PAC distributes its quarterly updates, the content—including the PAC's quarterly updates—is undergoing an overhaul as agencies develop updated goals and objectives for release in February 2018 with the President's next budget submission to Congress. It is unclear whether the new administration will continue to designate personnel security clearance reform as a cross-agency priority goal. PAC Program Management Office officials stated that they continue to track and report this information internally within the executive branch. These officials stated that they were uncertain as to whether performance.gov would remain a vehicle by which they would report on the status of the reform effort, including executive branch-wide timeliness. However, the officials also stated that it is important for the information to be reported in order to maintain transparency and the momentum of the reform effort.

Without transparent reporting by the executive branch on investigation and adjudication timeliness for both initial investigations and periodic reinvestigations, Congress will not be able to effectively execute its oversight role and monitor individual executive branch agency progress in meeting timeliness objectives. In addition, the absence of comprehensive reporting on personnel security clearance timeliness limits the ability of congressional decision makers to thoroughly evaluate and precisely identify where and why delays exist within the process, as well as to identify corrections as necessary. In addition, should the PAC's quarterly progress updates be suspended indefinitely, Congress and the public will have limited transparency into the status of key reform effort initiatives, which may delay the timely identification of problems and ultimately disrupt the momentum of the reform effort as a whole.

NBIB Has Taken Steps to Improve the Background Investigation Process but Faces Operational Challenges in Addressing the Investigation Backlog and Workforce Planning

The transition from FIS to NBIB has involved organizational changes intended to improve the background investigation process, but the bureau faces operational challenges in addressing the investigation backlog and associated workforce planning.
NBIB’s organizational changes include the creation of some new departments, and DOD is now responsible for designing, developing, and maintaining a new IT system for the bureau, but must contend with risks posed by vulnerabilities in OPM’s legacy IT systems, which NBIB still utilizes. As NBIB transitions, it has taken steps to improve its oversight of background investigations contracts and measure the completeness of background investigations; however, it faces operational challenges in developing a plan to reduce the size of the investigation backlog to a manageable level and in ensuring that its overall workforce is sized and structured to meet its mission. Establishment of NBIB Involved Organizational Changes, the Designation of Oversight Roles, and Transfer of IT Responsibilities to DOD The transition from FIS to NBIB involved some organizational changes, such as the creation of new departments designed to enhance information sharing and contract oversight, among other things. NBIB also made changes to existing departments, such as enhancing its counterintelligence division to foster greater collaboration with the intelligence community. In addition, NBIB is subject to oversight from multiple entities, such as OPM, ODNI, and the PAC. Further, DOD is now responsible for designing, developing, and maintaining a new IT system for NBIB that can provide increased security. However, vulnerabilities in OPM’s legacy systems pose risks to the security of the new system and could delay its implementation. Transition from FIS to NBIB Involved Changes to Organizational Structure NBIB was established to replace FIS, and the transition has involved changes to the organizational structure. In response to the results of 90- day review that were announced in January 2016, in September 2016, Executive Order 13741 amended Executive Order 13467 to establish the roles and responsibilities of NBIB within OPM and made the Director of NBIB a member of the PAC. According to Executive Order 13467, as amended, NBIB is to serve as the primary executive branch service provider for background investigations for, among other things, eligibility for access to classified information; eligibility to hold a sensitive position; suitability or fitness for government employment; and authorization to be issued a federal credential for logical and physical access to federally controlled facilities or information systems. Among other things, the bureau is to also provide effective, efficient, and secure personnel background investigations for the federal government. When announcing the establishment of NBIB, in January 2016, the administration reported the intention to create a dedicated transition team headquartered in Washington, D.C., to develop and implement a transition plan to: (1) stand up the bureau; (2) ensure that the transition timeline fully aligns with business needs; (3) transition the management of FIS IT systems to DOD; (4) migrate the existing mission, functions, personnel, and support structure of FIS to NBIB; and (5) provide continuity of service to customer agencies during the transition. According to its charter, the transition team was composed of current OPM employees, and federal employees detailed or assigned to OPM or DOD from other executive branch agencies and departments. 
NBIB officials noted that employees from across the executive branch with relevant experience and qualifications were recruited to ensure that stakeholder agencies' equities were represented, and that the transition team leader was recruited from outside of OPM and reported directly to the OPM Director throughout the transition process. OPM reported that NBIB became operational on October 1, 2016, but that the complete transition will take some time. For example, the transition plan specifies activities throughout fiscal year 2017 and into fiscal year 2018 to implement the transition from FIS to NBIB. NBIB officials said they expect that the bureau will have substantially migrated to the new organizational structure by mid-2018.

The transition also involved some organizational changes intended to streamline certain business processes or more effectively manage background investigations as the organization has continued to evolve. NBIB officials stated that the transition team established the organizational structure by assessing essential FIS functions in coordination with key community stakeholders—including new and external customers—through the PAC as well as FIS personnel. The officials said that the transition team then linked similar functions and interdependencies to establish each of the offices. Additionally, NBIB officials stated that the 2015 90-day review helped to determine the organizational structure because it identified a need for a business process reengineering analysis. Through its establishment, NBIB absorbed FIS and assumed its mission. NBIB's organizational structure includes several changes from the structure of FIS, including the establishment of the following four new departments:

1. Federal Investigative Records Enterprise. The functions of this department include a new law enforcement and records outreach group to improve outreach and more effectively collect information from state and local law enforcement offices.

2. Policy, Strategy and Business Transformation. The functions of this department include expanding existing performance reporting to incorporate metrics regarding effectiveness, and researching and identifying systemic issues in workload, processes, and products to determine where process improvement could be achieved.

3. Contracting and Business Solutions. The functions of this department include enhancing and consolidating administration of NBIB contracts to provide consistent oversight.

4. Information Technology Management Office. The functions of this department include supporting the delivery and enhancement of quality IT systems to NBIB in a timely and effective manner, gathering and communicating needs and requirements for new applications, and coordinating implementation of changes to current systems.

In addition to the creation of these new departments, NBIB also made changes to several other departments from FIS. For example, according to NBIB documents, the Field Operations department added a "Field Contracts" division that is designed to oversee and monitor the contractor workforce performing background investigations, to ensure quality and timely products. This department also enhanced its counterintelligence division to focus on counterintelligence and insider threat support and to foster greater collaboration with the intelligence community. Further, NBIB created a new financial office to oversee budgeting, pricing and funding models, financial reporting, data accuracy, and internal controls monitoring.
Moreover, NBIB created a new Integrity Assurance, Compliance, and Inspection division by merging the FIS Integrity Assurance and Inspection divisions to streamline similar functions and improve processes and efficiencies. Executive Order 13741 provided some guidelines governing the structure and location of NBIB. Specifically, it required that NBIB be headquartered in or near Washington, D.C., and that NBIB have dedicated resources, including but not limited to a senior privacy official. NBIB's headquarters is located in Washington, D.C., but according to NBIB officials, as of July 2017, only 48—including both occupied and vacant positions—of NBIB's 3,260 positions, or about 1.5 percent, were located in Washington, D.C. In addition, although the position of the senior privacy official has been established in the NBIB organization chart, according to NBIB officials, this position had not been filled as of July 2017. NBIB officials explained that they work closely with OPM's senior privacy officer, and so they decided to prioritize filling other leadership positions within NBIB.

NBIB Subject to Oversight from OPM, ODNI, and the PAC

NBIB is subject to oversight from multiple entities, such as OPM, ODNI, and the PAC. Executive Order 13741 provided that the bureau would be established within OPM. NBIB officials stated that the bureau is part of OPM and is governed in a manner consistent with its other operational components. They also said that although the structure of NBIB is different from that of FIS, its general relationship with OPM and its leadership reporting chain are similar. Specifically, comparing the organizational charts of FIS and NBIB, FIS was led by an Associate Director who reported to the Director of OPM, while NBIB is led by a Director who reports to the Director of OPM. According to NBIB, the OPM Director has delegated certain authorities to NBIB; additionally, the OPM Senior Procurement Executive delegated to NBIB certain administrative and acquisition authorities. NBIB officials said that this makes its structure more flexible. NBIB officials said that where support is provided from other OPM offices—such as communications, legislative affairs, legal, procurement, security, facilities, and the office of the Chief Information Officer—there is continual dialogue between that office's leadership and the staff directly supporting the bureau. The officials also noted a variety of regular meetings, such as a weekly meeting between the Acting Director of OPM and the NBIB Director and Chief of Staff, attendance at daily OPM senior staff meetings, and briefings every other month with the OPM Inspector General, among others.

In addition, as previously discussed, as the Security Executive Agent, the Director of National Intelligence is responsible for various matters related to security clearance investigation oversight, programs, policies, and processes. Executive Order 13467, as amended by Executive Orders 13741 and 13764, provides that NBIB, through the Director of OPM, is subject to the oversight of the Security Executive Agent with respect to the conduct of investigations for eligibility for access to classified information or to hold a sensitive position. Similarly, Executive Order 13467, as amended, provides that NBIB is responsible for conducting background investigations in accordance with policies, procedures, standards, and requirements established by the Security Executive Agent and Suitability Executive Agent.
In February 2017, the Acting Director of OPM testified that the bureau has been working closely with ODNI to identify policy and process changes to address the investigation backlog. NBIB officials stated that the bureau and ODNI are active partners, and that the bureau participates in many of ODNI's working groups in the development of policies or processes related to personnel security clearances. In addition, the officials said that the bureau reports timeliness, quality, and performance metrics to ODNI on no less than a quarterly basis, and that its personnel collaborate with ODNI on reviews of processes, such as those related to social media, continuous evaluation, insider threat, and counterintelligence. ODNI officials told us that in its oversight role of NBIB, ODNI collects quarterly timeliness data and requests that agencies using NBIB as their investigative service provider enter the investigations into the QART to assess the quality of the investigations.

Further, Executive Order 13467, as amended by Executive Order 13741, describes an oversight relationship between the PAC and NBIB. It requires the PAC to hold NBIB accountable for the fulfillment of the bureau's responsibilities set out in the Executive Order. It further provides that NBIB is to provide the PAC with information, to the extent permitted by law, on matters of performance, timeliness, capacity, IT modernization, continuous performance improvement, and other relevant aspects of NBIB operations. PAC Program Management Office officials told us that they worked with NBIB during the transition from FIS, answered many questions, and helped to fill staffing and organization holes that were identified by the transition team.

DOD Is Building and Managing a New Security Clearance IT System for NBIB, but Security Concerns May Delay Planned Milestones for the New System

Executive Order 13467, as amended, assigns the Secretary of Defense the role of developing and securely operating IT systems that support all background investigation processes conducted by NBIB. According to officials from the Office of the DOD Chief Information Officer (CIO), NBIS will be built to NBIB specifications, and OPM will remain the owner of the data and processes. In testimony before the House Oversight and Government Reform Committee in February 2017, the DOD CIO estimated that NBIS would have several "prototype" capabilities by the end of fiscal year 2017, and an initial capability covering the full investigative process sometime in the fourth quarter of 2018. According to DOD officials, full capability for NBIS is scheduled for some time in 2019. However, an NBIB official noted the existence of challenges regarding the IT infrastructure and stated that it is more realistic for NBIS to be fully operational in 2020.

According to DOD CIO officials, unexpected complications have arisen since beginning development of NBIS. Specifically, these officials stated that they have discovered that NBIS may require many more interconnections to OPM legacy systems than originally planned. According to these officials, NBIB will continue to rely on OPM legacy systems for investigations of any complexity until NBIS becomes fully operational. Further, according to DOD CIO officials, when the executive branch begins to use NBIS, complex background investigations would begin in NBIS's electronic application, but would then need to pass through or draw data from multiple OPM legacy systems before returning to NBIS for adjudication.
According to DOD CIO officials, since OPM has 43 back-office functions fed by various systems that are often interrelated, a simple one-to-one system swap of NBIS for an OPM legacy system is not feasible. DOD CIO officials stated that the project management team building NBIS is currently working to fully understand how OPM's various back-office functions are tied together, and is also evaluating the cybersecurity risks inherent in connecting to OPM's legacy systems. DOD CIO officials explained that this connection, as well as logistical challenges associated with data migration from the legacy systems to NBIS, raises concerns about risks to NBIS. Until these risks are properly evaluated, any connection to the legacy systems could present vulnerabilities, according to DOD CIO officials. OPM officials disagreed, stating that OPM and DOD already have IT connection points with the OPM legacy systems, and that the security of OPM's systems and data continues to be an OPM priority.

Securing the legacy systems will be a joint effort by DOD and OPM, according to an October 2016 Memorandum of Agreement between the two agencies regarding the roles, responsibilities, and expectations of each party throughout the entire lifecycle of OPM's use of DOD's IT systems in support of the background investigation process. Under the agreement, OPM will retain ownership and responsibility for the operation and performance of all system authorization activities for OPM legacy systems throughout their lifecycle. The agreement provides that OPM will maintain security documentation and information and interconnection exchange agreements, own control selection and security role assignment processes, and perform risk executive functions. The memorandum further states that the security of the legacy OPM IT environment will be a joint effort between OPM and DOD, with DOD assisting in a comprehensive security assessment of all OPM legacy IT systems and related infrastructure on a reimbursable basis. According to DOD CIO and NBIB officials, there is close coordination on a technical level between the two agencies on securing the OPM legacy systems used by NBIB. The officials said that weekly coordination meetings are held between the two agencies, and that DOD has embedded staff at OPM who are under the direct supervision of the OPM CIO.

Both GAO and the OPM Inspector General have raised concerns on multiple occasions about various aspects of IT security at OPM, including OPM legacy systems used by NBIB. For example, in August 2017, we reported on OPM's progress in implementing 19 recommendations made by the United States Computer Emergency Readiness Team to bolster its information security practices and controls in the wake of the 2015 breaches. We found that, as of May 2017, OPM had fully implemented 11 of the recommendations. For the remaining 8 recommendations, actions for 4 were still in progress, and for the other 4, OPM indicated it had completed actions to address them, but we noted further improvements were needed. We further reported that since the 2015 data breaches, which included a compromise of OPM's systems and files related to background investigations for 21.5 million individuals, OPM has made progress in improving its security to prevent, mitigate, and respond to data breaches involving sensitive personal records and background investigations information. However, we also found that OPM did not effectively monitor actions taken to remediate identified weaknesses.
OMB requires agencies to create a Plan of Action and Milestones to track efforts to remediate identified weaknesses, such as those leading to the 19 recommendations made by the United States Computer Emergency Readiness Team. In addition, OPM's policy requires that scheduled completion dates be included in the plan. The policy also requires a system's Information System Security Officer to develop a weakness closure package containing evidence of how an open Plan of Action and Milestones has been remediated before the issue, or recommendation in this case, can be closed. Although OPM has a Plan of Action and Milestones to address the 19 recommendations, we found that it had not validated actions taken in a timely manner or updated completion dates in the plan. Because the United States Computer Emergency Readiness Team recommendations are intended to improve the agency's security posture, we noted that more timely validation of the effectiveness of the actions taken is warranted. Until closure packages are created and the evidence of such actions is validated, OPM has limited assurance that the actions taken have effectively mitigated vulnerabilities that can expose its systems to cybersecurity incidents.

Additionally, in May 2016, we reported on the implementation of OPM's information security program and the security of selected high-impact systems. We found that OPM, one of four agencies reviewed, had implemented numerous controls to protect selected systems, but that access controls had not always been implemented effectively. We reported that weaknesses also existed in patching known software vulnerabilities and planning for contingencies, and that an underlying reason for these weaknesses was that OPM had not fully implemented key elements of its information security program. We recommended that OPM fully implement key elements of its program, including addressing shortcomings related to its security plans, training, and system testing. According to OPM officials, the agency is taking actions to address these recommendations. In August 2016, we issued a restricted version of our May 2016 report that identified vulnerabilities specific to each of the two systems we reviewed and made recommendations to resolve access control weaknesses in those systems. In December 2016, OPM indicated its concurrence with the recommendations and provided timeframes for implementing them.

OPM officials expressed concern that the information from our 2016 reports was now dated, stating that it no longer reflects the current security posture at OPM, and said that they had taken actions to address these recommendations. However, all of the recommendations directed to OPM from the two reports remained open as of November 2017. We had not received any documentation regarding these actions as of November 2017 and thus could not validate the extent to which any of these recommendations had been addressed. OPM's Office of the Inspector General has also raised related concerns, most recently in its October 2017 report on OPM's security program and practices. Overall, the OPM Inspector General found that OPM's cybersecurity maturity level was measured at level 2, "Defined," meaning that its policies, procedures, and strategy were formalized and documented but were not consistently implemented.
According to the report, OPM has made improvements in its security assessment and authorization program, and its previous material weakness related to authorizations is now considered a significant deficiency for fiscal year 2017. The report noted that there are still widespread issues related to system authorizations, primarily related to documentation inconsistencies and incomplete or inadequate testing of the systems' security controls. In addition, the report identified a significant deficiency in OPM's information security management structure and found that OPM was not making substantial progress in implementing prior Inspector General recommendations. The report noted that OPM had closed only 34 percent of the findings issued in the past 2 years.

In addition to these IT security concerns, funding uncertainties have also complicated the development of NBIS. The President's fiscal year 2017 budget included $95 million for the development of the system; however, according to DOD CIO officials, of the $95 million that was appropriated, DOD had provided only $31 million for NBIS as of June 2017. According to DOD CIO officials, the fiscal year 2017 continuing resolution had complicated decisions about the funding and disbursement schedule, with consequences for planning and the apportioning of resources. A draft funding profile covering fiscal years 2017-2023 estimates funding needs of $175.7 million for research, development, test and evaluation and $709.4 million for operation and maintenance over this 7-year period, for a total of about $885 million.

NBIB Has Taken Steps to Improve Operations but Faces Workforce Challenges

As NBIB transitions, it has taken steps to improve its operations but continues to face workforce challenges that may hinder its ability to address the backlog of investigation cases and strengthen the background investigation process. The bureau has taken positive steps to improve its oversight of background investigation contracts, including changing contract oversight processes and measuring the completeness of background investigations. However, it faces operational challenges in developing a plan to reduce the size of the investigation backlog and in ensuring that its overall workforce is sized and structured to address it.

OPM Has Taken Steps to Improve Oversight of Background Investigation Contracts

Contractors are responsible for about 60 percent of NBIB's background investigation fieldwork, according to NBIB officials. Since 2014, OPM has taken steps to improve its oversight of contracts. NBIB officials stated that changes were made in response to OPM Inspector General recommendations, and that some others were made in response to lessons learned after issues that led to the loss of OPM's largest fieldwork contractor in 2014. These changes included (1) having federal employees review all background investigation reports, (2) increasing the number of individuals responsible for monitoring contractors' compliance with contractually established requirements, and (3) establishing a contracting activity within NBIB. Since February 2014, federal employees have reviewed 100 percent of background investigation reports produced by contractors. In contrast, prior to February 2014, federal employees at FIS or a support contractor would review a subset of all of the investigations before releasing them to the respective customer agencies for adjudication.
NBIB officials stated that, as currently structured, there are now about 350 federal employees within NBIB's Quality Oversight department who conduct these reviews for both contractor- and federal investigator-conducted cases to determine whether an investigation meets investigative standards for completeness before being released to the customer agency for adjudication. Using an internal database, OPM reviewers identify what, if any, elements of the investigative reports are incomplete and do not meet standards, and they return cases to the investigators for rework as necessary. When OPM reviewers determine that a case meets investigative standards, they close the case and submit it to an adjudicator. Contractors are evaluated for quality performance based on the number of times a case is returned by OPM reviewers for rework as a percentage of the total number of cases completed. According to NBIB data from its internal quality database, the percentage of cases conducted by contractors requiring rework decreased from about 6 percent to 3.2 percent between the last quarters of fiscal years 2014 and 2016.

According to NBIB officials, in 2014, OPM established an independent inspections branch to help the agency's contracting officer's representatives (CORs) oversee the background investigation fieldwork contracts. CORs, who are designated in writing by contracting officers, assist in the technical monitoring or administration of a contract. Under NBIB's current background investigation fieldwork contracts, the COR provides technical direction and control during contractor performance, monitors contract progress, and determines for payment approval purposes whether performance is acceptable with respect to content, quality of services and materials, cost, and timeliness. NBIB officials stated that prior to the establishment of the inspections branch, the CORs were responsible for monitoring all aspects of contract compliance as well as a range of administrative duties, such as tracking performance data, IT support, and billing. Under the current NBIB structure, 16 inspectors in the Integrity Assurance, Compliance and Inspections division focus on contract oversight, according to NBIB officials. In addition to the inspectors, the officials said that there are 17 CORs—one in the Integrity Assurance, Compliance and Inspections division and 16 in the Field Operations department.

Additionally, according to NBIB officials, FIS, NBIB's predecessor, did not have its own contracting division, and instead relied on OPM's centralized Office of Procurement Operations for contracting support. NBIB's new organizational structure includes a Contracting and Business Solutions department. According to NBIB officials, they filled the new Head of Contracting Activity position in January 2017. NBIB officials stated that OPM established this new position and department in an effort to strengthen the bureau's contracting function by creating dedicated positions more narrowly focused on overseeing the contracting function for background investigations and support services.

NBIB Has Taken Steps to Measure Completeness of Background Investigations

NBIB has developed quality assurance processes and tools to measure the completeness of its investigations.
Specifically, NBIB has developed an internal quality database through which federal case reviewers can determine the completeness of investigations, in accordance with investigative standards, that are being produced by both its federal and contract investigators, and can rate cases as either "meets standards" or "below standards." Cases that are marked as "below standards" are returned to the contractor for rework prior to being finalized and sent to the customer for adjudication. NBIB then monitors, through its Key Performance Indicators, the percentage of investigations that are returned by customer agencies and that NBIB agrees require additional work.

Our prior work found that relying on agencies to provide information on investigation quality, by itself, may not provide an accurate reflection of the quality of background investigations. We have reported in the past that officials from several agencies have stated that to avoid further costs or delays, agencies often choose to perform additional steps internally to obtain missing information, clarify or explain issues identified in investigative reports, or gather evidence for issue resolution or mitigation. As recently as July 2017, DOD officials stated that issue resolution was still a concern for them. However, NBIB officials stated that they conduct background investigations in accordance with the Federal Investigative Standards, and that while adjudicators may want more or different details, these are considered outside the scope of background investigations but can be provided on a case-by-case basis.

NBIB Leadership Has Not Developed a Plan to Reduce the Investigation Backlog

NBIB leadership has not developed a plan to reduce the size of the investigation backlog to a manageable level. NBIB's Key Performance Indicators report states that a "healthy" inventory of work, representing approximately 6 weeks of work and allowing NBIB to meet timeliness objectives, is around 180,000 pending investigations. According to NBIB, the backlog of pending investigations increased from about 190,000 in August 2014, before OPM decided not to exercise subsequent option periods for its largest investigative fieldwork contract at the time, to more than 709,000 investigations as of September 2017, as shown in figure 5. NBIB estimated that the backlog grew at an average rate of about 3,600 investigations each week from October 2016 through July 2017.

As we reported when placing DOD's personnel security clearance program on the high-risk list, problems related to backlogs and the resulting delays in determining clearance eligibility and issuing initial clearances can result in millions of dollars of additional costs to the federal government, longer periods of time needed to complete national security-related contracts, lost opportunity costs if prospective employees decide to work elsewhere rather than wait to get a clearance, and diminished quality of work, because industrial contractors may be performing government contracts with personnel who have the necessary security clearances but are not the most experienced and best-qualified personnel for the positions involved. Delays in renewing previously issued clearances can lead to heightened risk of national security breaches because the longer individuals hold a clearance, the more likely they are to be working with critical information and systems.
As the backlog has grown, NBIB has taken steps to increase its capacity to conduct background investigations by increasing its own investigator staff as well as awarding new contracts, effective in December 2016, to four contractors for investigation fieldwork services. NBIB officials said that NBIB has a goal to increase its total number of investigators—federal employees and contractors—to about 7,200 by the end of fiscal year 2017.

Specifically, to help address the backlog, NBIB officials reported that NBIB increased its authorized federal investigator workforce by adding 400 federal investigator positions in fiscal year 2016 and 200 positions in fiscal year 2017—an increase from 1,375 to 1,975 authorized positions. As of July 2017, NBIB had filled 1,620 of the 1,975 positions, and 1,513 of its federal investigators were fully trained. NBIB officials explained that they do not plan to increase the federal investigator capacity beyond the currently approved 1,975 because they do not have the ability to absorb more staff. According to the officials, new investigators must be trained by experienced investigators, which reduces the amount of time the experienced investigators have to conduct investigative work. When estimating federal investigator capacity, NBIB assumes it will have 277 full-time equivalent vacancies at any given time due to high attrition rates. Further, NBIB officials could not project the federal investigator workforce past April 2018 due to high attrition rates.

Given challenges with increasing its federal investigative staff, NBIB continues to rely on contractors to conduct the majority of investigations. NBIB officials noted that contractors perform about 60 percent of NBIB's total investigative cases. OPM awarded four new investigative fieldwork services contracts that became effective in December 2016—two to incumbent contractors and two to new vendors. In July 2017, OPM officials told us that the contractor and federal staff capacity they currently possess enables them to complete a sufficient number of investigations to prevent the number of pending investigations from increasing further. However, they acknowledged that the four contracts and federal investigator staff do not currently provide OPM enough capacity to reduce the pending number of investigations to the "healthy" inventory level of 180,000 cases.

NBIB officials have conducted analyses to determine how changes in the total number of investigators could affect the backlog over time, accounting for current and projected investigator capacity, prior time studies, historical data, geographic location, and other factors. Specifically, NBIB officials assessed four scenarios, ranging from the status quo—assuming no additional contractor or federal investigator hires—to an aggressive contractor staffing plan beyond January 2018, but in July 2017 they determined that the aggressive plan was not feasible. The two scenarios that NBIB identified as most feasible would not result in a "healthy" inventory level until fiscal year 2022 at the earliest. For example, under one scenario, each contractor would increase investigator capacity under current staffing projections through early 2018. Assuming that the contractors adhere to these projections, NBIB would have the capacity to address incoming cases and begin to reduce the backlog, but the backlog would not reach a "healthy" inventory level until sometime after fiscal year 2022.
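Although the details of NBIB's scenario analysis are not public, the basic mechanics of such a projection can be sketched as follows. The net weekly reduction rates in this example are hypothetical placeholders, not NBIB's actual staffing projections.

```python
# Minimal sketch of the kind of backlog projection NBIB's scenario analysis
# describes. The net weekly reduction rates are hypothetical placeholders,
# not NBIB's actual projections.

def weeks_to_healthy(backlog: int, healthy: int, net_weekly_reduction: int) -> float:
    """Weeks needed to drive the backlog down to the healthy level, assuming
    completions exceed incoming cases by a constant weekly amount."""
    if net_weekly_reduction <= 0:
        return float("inf")  # status quo or worse: the backlog never shrinks
    return (backlog - healthy) / net_weekly_reduction

BACKLOG = 709_000   # pending investigations, September 2017
HEALTHY = 180_000   # NBIB's "healthy" inventory level

for net_reduction in (0, 1_000, 2_000, 4_000):  # hypothetical cases per week
    weeks = weeks_to_healthy(BACKLOG, HEALTHY, net_reduction)
    label = ("never" if weeks == float("inf")
             else f"~{weeks:,.0f} weeks (~{weeks / 52:.1f} years)")
    print(f"Net reduction of {net_reduction:>5,} cases/week: {label}")
```

Under these assumptions, a sustained net reduction of about 2,000 cases per week would take roughly 5 years to reach the healthy level, which is broadly consistent with NBIB's most feasible scenarios not reaching that level until fiscal year 2022 at the earliest.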
However, NBIB leadership has not determined whether the costs and benefits of any one scenario are preferable to the costs and benefits of the others. Standards for Internal Control in the Federal Government establishes that management should clearly define objectives to enable the identification of risks and define risk tolerances. In addition, our high-risk criteria for capacity call for agencies to ensure they have the capacity, in terms of people and resources, to address and resolve risks. We have also found in previous work that milestones can be used to show progress toward implementing efforts or to make adjustments when necessary. Developing and using specific milestones to guide and gauge progress toward an agency's desired results informs management of the rate of progress toward its goals and of whether adjustments are needed to maintain progress within given timeframes.

However, NBIB leadership has not established goals or milestones for reducing the size of the investigation backlog, or goals for increasing total investigator capacity—for both federal employees and contractor personnel. As a result, the value of NBIB's backlog analysis is limited, because it is not part of a broader plan to address the backlog and achieve timeliness objectives. Further, the extent to which NBIB should adjust its investigator capacity in the future remains unclear, as the currently projected capacity levels are not tied to any established goals or milestones to address the backlog or achieve the timeliness objectives.

In addition to increasing investigative capacity, NBIB personnel are attempting to decrease the backlog by making the background investigation process more effective and efficient. To do so, NBIB conducted a business process reengineering effort that was intended to identify challenges in the process and their root causes. This effort identified 57 challenges, which were divided into five main categories that affected multiple phases of the background investigation process. NBIB then developed five portfolios, with 21 initiatives, to address the identified challenges. For example, one of the categories of challenges was poor data quality at the start of the investigation, which was described as related to issues such as no auto-validation of information, no pre-population of forms, and variable quality of submissions. NBIB developed four initiatives related to automation and digitization to improve the quality of this information. NBIB officials said that this business process reengineering effort is working to reduce the investigative level of effort across the community. Specifically, NBIB officials cited efforts that have been implemented to reduce the number of personnel hours necessary to complete an investigation, such as centralizing interviews and using video-teleconferencing for overseas investigations (to decrease travel time), automated record checks, and focused writing (to make reports more succinct and less time-consuming to prepare). However, NBIB has not identified how the implementation of the business process reengineering effort will affect the backlog or the need for additional investigators in the future.
Without a plan for reducing the backlog that includes goals and milestones, as well as a determination of the effect of the business process reengineering efforts on the backlog, NBIB will lack the information and the course of action needed to effectively manage the inventory of pending investigations it conducts on behalf of other executive branch agencies. Further, without establishing goals for increasing total investigator capacity—for both federal employees and contractor personnel—in accordance with the plan for reducing the backlog, NBIB may not be positioned to achieve the goals and milestones outlined in that plan. Ultimately, if NBIB is unable to reduce the backlog, executive branch agencies will continue to lack the cleared personnel needed to help execute their respective missions, which could decrease the agencies' overall effectiveness and efficiency and pose risks to national security.

NBIB Has Identified Some Workforce Needs but Does Not Have a Strategic Workforce Plan

Our review of NBIB planning and workforce documents indicates that it has taken workforce planning steps. For example, the bureau developed a transition plan to help guide the transition from FIS to NBIB. This plan includes a request for a personnel study for its new Contracting and Business Solutions department to determine any needs or realignment of resources, skills, or qualification gaps; however, the transition plan does not mention a personnel study to address the needs of any other departments within NBIB. NBIB officials stated that the bureau conducted this study in early fiscal year 2017, and those results are being used to build the Contracting and Business Solutions department. NBIB officials said that NBIB plans to conduct a personnel study for its other departments once there is greater clarity and direction regarding the conduct of background investigations as a result of the plan developed by DOD to conduct its own investigations and any subsequent direction from Congress and the Administration. The officials stated that the personnel study was needed for the contracting department because this work had not been done in NBIB before, and so they needed to establish a baseline for staffing it.

As previously discussed, section 951 of the National Defense Authorization Act for Fiscal Year 2017 required, among other things, the Secretary of Defense to develop an implementation plan for the Defense Security Service to conduct background investigations for certain DOD personnel—presently conducted by OPM—after October 1, 2017. Additionally, in November 2017, as this report was in its final stages, Congress passed a bill for the National Defense Authorization Act for Fiscal Year 2018, which includes a provision that, among other things, would authorize DOD to conduct its own background investigations. It would also require DOD to begin carrying out the implementation plan developed in response to section 951 by October 1, 2020. The legislation would further require the Secretary of Defense, in consultation with the Director of OPM, to provide for a phased transition of the conduct of investigations from NBIB to the Defense Security Service. Moreover, this legislation would require the Secretary of Defense to conduct a comprehensive assessment of workforce requirements for both DOD and NBIB as part of planning for the transfer of certain functions from OPM to DOD.
In addition, the NBIB transition team developed a talent acquisition strategy for the establishment of the bureau; however, this strategy was focused solely on filling nine key leadership positions (according to NBIB officials, four positions are senior executive service positions, and five are general schedule grade 14 and 15 positions). As of July 2017, NBIB officials said that six of these positions had been filled and that another was in the process of being staffed. The only mention of other positions in this strategy was a statement that once these key leadership positions have been filled, executives should build their respective departments consistent with mission needs and aligned with the NBIB strategic plan, and that NBIB should use current FIS leadership for field operations, engagements and customer service, and integrity assurance.

According to NBIB officials, NBIB has 3,260 positions authorized by OPM but had 495 vacancies as of July 1, 2017—approximately a 15 percent vacancy rate. NBIB officials said that most positions were not affected by the recent executive-branch hiring freeze, including investigators and investigative assistants, because they qualified for national security waivers; however, some positions, such as administrative support, were not covered by the waivers. The greatest total number of vacancies within NBIB is in its Field Operations department, which as of July 2017 had almost 400 vacancies, or a vacancy rate of about 17 percent. The Field Operations department provides contractor oversight, including program and project managers for fieldwork and CORs; it also includes federal investigator staff. NBIB officials stated that their greatest challenge in filling vacancies has been with their investigative workforce, and that as they fill their investigator positions, they will be better able to perform their mission of delivering completed background investigations in a timely manner due to having greater capacity. NBIB officials told us that they plan to hire another 200 federal investigators in fiscal year 2017 to help address the backlog of investigations; however, hiring 200 new federal investigators was not listed as a step in the transition plan for the Field Operations department, and these new investigator positions also are not included in the planned new hires listing of personnel hiring priorities. NBIB officials said that these new investigator positions were not included in the transition plan because the decision to hire for these positions had already been made and the hiring was being executed when the transition plan was developed.

Furthermore, NBIB has developed detailed plans to hire new personnel. NBIB's listing of personnel hiring priorities showed that NBIB initially planned to hire 155 new personnel. NBIB officials explained that in developing this initial hiring plan, organizational leaders assessed OPM legacy resources that would align with NBIB's mission, roles, and responsibilities, and identified gaps. These officials stated that at a leadership offsite held in December 2016, small groups identified existing and notional resources, prioritized resource gaps for identified programs, and briefed out their assessment of priorities. These officials said that the offsite participants then selected the top priorities for fiscal years 2017 and 2018, and that NBIB leadership subsequently developed individualized proposals outlining revisions and changes to personnel requirements and the organization of each of the program areas.
NBIB officials said that they subsequently refined these plans and reduced the number of planned new hires. The officials stated that in 2017, NBIB established a transitional hiring committee to further prioritize and select the final NBIB personnel structure, and that through a series of meetings in March, May, and June 2017, they refined their plans to reduce the number of planned new positions. As of July 2017, they said that NBIB planned to create and fill 49 new positions. According to NBIB officials, 13 of the new positions would involve an increase to the budget. Of those 49 new positions, they said that 21 had been filled as of July 2017. In addition, NBIB uses contractor support to fill some positions in its Field Operations, Federal Investigative Records Enterprise, and customer service departments, but NBIB officials did not provide documentation explaining the determinations for which tasks should be performed by contractors versus federal employees. NBIB officials stated that they followed a deliberative process requiring a thoughtful assessment of the personnel resource skills and competencies required to address the new NBIB objectives, but they could not provide any supporting documentation to that effect.

A key principle of strategic workforce planning is determining the critical skills and competencies needed to achieve current and future programmatic results, such as identifying how the agency will obtain the workforce skills and competencies that are critical to achieving its strategic goals. In addition, OPM's workforce planning best practices include forecasting the optimal headcount and competencies needed to meet the needs of the organization in the future, and a gap analysis to identify headcount surpluses and deficiencies for current and future demand levels. Further, OMB policy requires agencies to take actions to ensure they have sufficient internal capability to maintain control over functions that are core to the agency's mission and operations.

However, NBIB officials were unable to provide us with documentation that identified any of the gaps or explained the rationale for the bureau's determinations about the specific number and positions of additional staff needed. The documents they did provide appeared to be summaries of the revisions and changes decided upon, and included detailed information about the identified staffing requirements, such as information about the number of positions, position titles and types, grade levels, and hiring priority. While this information reflects detailed planning and thought, it does not illuminate whether the quantities and types of positions identified are the appropriate positions with the right critical skills and competencies needed to address any gaps in the bureau's workforce. NBIB officials said that the hiring plans were originally determined at the leadership offsite, where the rationale for the specific number and positions of additional staff was discussed orally, and then further refined at a series of meetings beginning in March 2017. The officials told us that extensive review went into determining the rationale for the requests for new staff, and that these requests were the subject of robust and sometimes contentious debate, after which the requests were voted on by senior leadership. Although NBIB has taken some steps to develop and implement certain strategic workforce planning elements, it has not created a comprehensive, formal workforce plan that is focused on the workforce needed to reduce the backlog.
Such a plan should include the workforce skills and competencies that are critical to achieving NBIB's strategic goals. As we previously reported, the most important consideration in identifying needed skills and competencies is that they are clearly linked to the agency's mission and long-term goals developed jointly with key congressional and other stakeholders during the strategic planning process. If an agency identifies staff needs without linking the needs to strategic goals, or if the agency has not obtained agreement from key stakeholders on the goals, the needs assessment may be incomplete and premature. In addition, a strategic workforce plan could enable NBIB to (1) develop hiring, training, staff development, succession planning, performance management, use of flexibilities, and other human capital strategies and tools that could be implemented with the resources that can reasonably be expected to be available; and (2) eliminate identified gaps between the current and future skills and competencies needed for mission success, and improve the contribution of those critical skills and competencies.

NBIB officials explained that a strategic workforce plan is something they should create, but that as a new organization the bureau was focused on other priorities, such as hiring a director, selecting the headquarters location, addressing the backlog, and filling vacant positions. However, after being operational for almost a year, NBIB still lacks a comprehensive workforce plan. While it has taken several other steps intended to strengthen the background investigation process, without a formal strategic workforce plan, NBIB does not know whether the identified needs in its new hires, transition plan, and overall workforce vacancies will provide the appropriate mix of personnel. Specifically, it does not know whether it has the appropriate mix of federal employees and contractors, with the right critical skills and competencies, to address any staffing gaps and better enable the bureau to fulfill its mission. A comprehensive strategic workforce plan that focuses on the workforce and organizational elements needed and addresses capacity issues related to its vacancies would better position NBIB to address its investigation backlog. Additionally, a comprehensive strategic workforce plan would better position the bureau to execute its roles and responsibilities related to overseeing the background investigations for DOD and other executive branch agencies that rely on NBIB as their investigative service provider.

Conclusions

The PAC has made progress in reforming the personnel security clearance process. However, 13 years after the passage of IRTPA, the reform effort is now at a crossroads. The backlog of background investigations totaled over 700,000 cases as of September 2017, and while the executive branch is taking actions to help address it, there are no indications that the government can readily do so. We have noted in prior work concerns about the quality of background investigations and have emphasized the need to build quality throughout the personnel security clearance process for nearly two decades. Even though it has made significant efforts, the executive branch has still not established government-wide performance measures for the quality of background investigations to help ensure that critical security-relevant information is identified and mitigated when granting a security clearance.
Over the past 2 years, the executive branch has taken steps toward establishing such measures. However, ODNI, as the Security Executive Agent, and the PAC have not prioritized setting a milestone for their completion. Without a milestone for establishing government-wide performance measures for the quality of investigations, their completion may be further delayed, and executive branch agencies will not have a schedule against which they can track progress or to which they are accountable.

Executive branch timeliness in completing initial secret and initial top secret clearances has declined over the past 5 years. While ODNI has taken some steps to correct this downward trend on an agency-by-agency basis, neither ODNI nor the PAC has led a government-wide approach to improve the timeliness of initial personnel security clearances. While ODNI requests that agencies submit corrective action plans when they are not meeting timeliness objectives, it has not developed a comprehensive, government-wide plan with goals and milestones. A government-wide plan would help position ODNI, as the Security Executive Agent, as well as the PAC, to better identify and address systemic issues across the executive branch that affect the ability of agencies to meet timeliness objectives.

IRTPA also created greater transparency and oversight of the overall reform effort by mandating annual reports to the appropriate congressional committees on the progress made toward meeting the act's requirements, including reporting timeliness data. However, since the IRTPA reporting requirement ended in 2011, executive branch reporting has been limited, which makes it difficult to thoroughly evaluate and precisely identify where and why delays exist within the process, as well as to direct corrections as necessary. Without transparent reporting on investigation and adjudication timeliness, for both initial investigations and periodic reinvestigations, Congress will not be able to effectively execute its oversight role and monitor individual executive branch agency progress in meeting timeliness objectives.

The establishment of NBIB in 2016, to strengthen the background investigation process, involved a number of organizational changes and efforts to improve the process. While NBIB has taken steps to increase its investigative capacity, it faces challenges in developing a comprehensive plan, with goals and milestones, to address the investigation backlog. Without such a plan, NBIB lacks a necessary course of action to reduce the backlog to a manageable level. Relatedly, NBIB has not established goals for increasing total investigator capacity. Establishing such goals, in accordance with the plan for reducing the backlog, may better position NBIB to achieve the goals and milestones outlined in that plan. Ultimately, if NBIB is unable to reduce the investigation backlog, executive branch agencies will continue to lack the cleared personnel needed to help execute their respective missions, which poses potential risks to national security. Demonstrated leadership from ODNI, in its capacity as the Security Executive Agent, and from the PAC, through assistance to NBIB as it works to reduce the investigation backlog, could better position NBIB to reach a manageable level of investigations.

Additionally, NBIB faces operational challenges related to workforce planning. While the bureau has taken a number of workforce planning steps, such as identifying specific hiring needs, it has not developed a strategic workforce plan.
As a result, it may not know whether it has planned for the appropriate mix of personnel, with the right critical skills and competencies, and it has experienced delays in addressing its hiring needs. A comprehensive strategic workforce plan that focuses on the workforce and organizational elements needed and addresses capacity issues related to its vacancies would better position NBIB to address its investigation backlog and strengthen the investigation process.

Matter for Congressional Consideration

Congress should consider reinstating the Intelligence Reform and Terrorism Prevention Act of 2004's requirement for the executive branch to report annually to appropriate committees of Congress on the amount of time required by authorized investigative and adjudicative agencies to conduct investigations, adjudicate cases, and grant initial personnel security clearances. Congress should also consider adding to this reporting requirement the amount of time required to investigate and adjudicate periodic reinvestigations and any other information deemed relevant, such as the status of the investigation backlog and of efforts to implement government-wide measures for the quality of investigations or other reform efforts. (Matter for Consideration 1)

Recommendations for Executive Action

We are making a total of six recommendations: three to ODNI, in coordination with the PAC, and three to NBIB.

The Director of National Intelligence, in his capacity as Security Executive Agent, and in coordination with the other Security, Suitability, and Credentialing Performance Accountability Council Principals—the Deputy Director for Management of OMB in his capacity as Chair of the PAC, the Director of OPM, and the Under Secretary of Defense for Intelligence—should take the following three actions:

- establish a milestone for the completion of government-wide performance measures for the quality of investigations (Recommendation 1);
- conduct an evidence-based review of the investigation and adjudication timeliness objectives for completing the fastest 90 percent of initial secret and initial top secret security clearances, and take action to adjust the objectives if appropriate (Recommendation 2); and
- develop a government-wide plan, including goals and interim milestones, to meet those timeliness objectives for initial personnel security clearance investigations and adjudications (Recommendation 3).

The Director of NBIB, in coordination with the Deputy Director for Management of OMB, in the capacity as Chair of the PAC, and the Director of National Intelligence, in the capacity as Security Executive Agent, should take the following two actions:

- develop a plan, including goals and milestones, for reducing the backlog to a "healthy" inventory of work, representing approximately 6 weeks of work, that includes a determination of the effect of the business process reengineering efforts (Recommendation 4); and
- establish goals for increasing total investigator capacity—federal employees and contractor personnel—in accordance with the plan for reducing the backlog of investigations (Recommendation 5).

The Director of NBIB should build upon NBIB's current workforce planning efforts by developing and implementing a comprehensive strategic workforce plan that focuses on what workforce and organizational needs and changes will enable the bureau to meet the current and future demand for its services.
(Recommendation 6)

Agency Comments and Our Evaluation

We provided a draft of this report to OMB, ODNI, OPM, DOD, the Department of Justice, and the Department of Homeland Security for review and comment. OMB provided its comments via email, and the comments are summarized below. Written comments from ODNI and OPM are reprinted in their entirety in appendixes V and VI, respectively. OMB, ODNI, OPM, and the Department of Homeland Security provided additional technical comments, which we incorporated in the report as appropriate. DOD and the Department of Justice did not provide comments. OMB and OPM concurred with the recommendations directed to them. ODNI stated that it did not concur with the report's conclusions and recommendations, but did not specifically state with which recommendations it did not concur.

In comments e-mailed to us on November 9, 2017, the Acting Deputy Director for Management of OMB concurred with the report's findings, conclusions, and recommendations. The comments also stated that the administration is committed to renewing public reporting of security clearance timeliness, once the government-wide reform initiatives are announced in early 2018, either as one of the administration's cross-cutting priority goals or via another approach. While the PAC's prior public reporting on the status of security clearance reform efforts was beneficial and helped to provide for transparency of the process, we believe that security clearance timeliness information should be reported—whether publicly or via reporting to Congress—broken out by individual executive branch agency and not only as an executive branch-wide average, as noted in our Matter for Congressional Consideration. As discussed in the report, such detailed reporting could help congressional decision-makers and OMB to thoroughly evaluate and precisely identify where and why delays exist within the process, as well as to direct corrections as necessary. In addition, OMB stated that the PAC is committed to ensuring that its Implementation Plan is continually updated to reflect the current status of reform efforts and that it incorporates any new initiatives arising from our review.

In its written comments, ODNI stated that the report appears to draw negative inferences from the facts and that the conclusions do not present an accurate assessment of the current status of the personnel security clearance process. ODNI also stated that the conclusions do not include the significant progress ODNI has achieved in coordination with executive branch agencies. We disagree with these statements. The report discusses in detail the progress that the PAC—of which ODNI is a Principal member—has made to reform the personnel security clearance process, including the implementation of recommendations and milestones from the 120-day and 90-day reviews, and cross-agency priority goal updates. The report also discusses areas of progress highlighted by ODNI officials, such as the development of seven Security Executive Agent Directives, the issuance of Quality Assessment Standards for background investigations, and the implementation of the QART. In its comments, ODNI further stated that while it generally concurred with the factual observations in the report, it did not concur with our recommendations. While ODNI did not specifically state with which recommendations it disagreed, it discussed each of the three recommendations addressed to it.
In addition, ODNI stated that it did not concur with our conclusions, and it provided specific observations in the three areas that led to our three recommendations.

First, ODNI disagreed with our conclusion that it has not prioritized setting a milestone for the completion of government-wide performance measures for the quality of background investigations. ODNI also stated that the report ignores significant progress that ODNI has made in this area; specifically, the approval of Quality Assessment Standards for background investigations and the implementation of the QART. We disagree with ODNI's position, as the report discusses in detail both the Quality Assessment Standards and the QART, and identifies these as the two steps toward the development of performance measures for the quality of background investigations. Additionally, ODNI stated that it has the ability to determine trends in background investigative quality from the data collected by the QART. However, as we note in the report, DOD background investigations—which represent the majority of the investigations conducted by NBIB—are not captured by the QART. We further noted that according to NBIB officials, they are not positioned to receive comprehensive feedback if their largest customer, DOD, is not utilizing the QART. Therefore, as we concluded in the report, it is unclear how ODNI will have sufficient data to develop government-wide measures for the quality of investigations, since it will lack data for a significant portion of the executive branch's background investigations.

Regarding our recommendation that the Director of National Intelligence, in coordination with the other PAC Principals, establish a milestone for the completion of government-wide performance measures for the quality of investigations, ODNI stated that it is premature to do so and that it will set a milestone once the QART metrics discussed above have been fully analyzed. However, in its written comments, ODNI did not state when it anticipates the QART metrics will be fully analyzed. We recognize that fully analyzing the QART data may take time and that initial performance measures may be refined as ODNI collects and assesses data regarding the quality of background investigations. However, setting a milestone—one that takes into consideration the amount of time needed to analyze QART data—will help to ensure that the analysis is completed, that initial performance measures are developed, and that agencies have a greater understanding of what they are being measured against. We identify in the report that the executive branch previously set a milestone for the completion of government-wide performance measures for quality, which was adjusted over time and most recently set as October 2015. We further identify that the PAC has set milestones for the completion of nearly 50 other initiatives in its Implementation Plan, and that in the aftermath of the 2013 Washington Navy Yard shooting, the PAC (which includes ODNI) issued a 120-day review report that, among other things, recommended reporting on measures for quality. We continue to believe that setting a milestone could help to prevent further delays to the measures' completion and provide the executive branch with a schedule against which it would be accountable.

Second, ODNI did not agree with our conclusion that neither ODNI nor the PAC has led a government-wide approach to improving the timeliness of initial personnel security clearances.
In its written comments, ODNI discusses actions it has taken to improve timeliness since the passage of IRTPA, including resetting timeliness goals for certain clearances in 2012, in coordination with interagency stakeholders; issuing annual memorandums to agencies on their performance; and requesting that agencies develop agency-specific corrective action plans. We discuss all of these actions in the report, and while we agree that they are positive actions, the executive branch would further benefit from a more coordinated approach. For example, even with the cited actions, the executive branch is experiencing significant challenges related to the timely processing of initial personnel security clearances. Specifically, as discussed in the report, in fiscal year 2016, only 2 percent of the agencies for which ODNI provided timeliness data met the 40-day IRTPA-established investigation objective for at least three of four quarters for the fastest 90 percent of initial secret cases; and only 12 percent met ODNI's revised investigation objective for at least three of four quarters for the fastest 90 percent of initial top secret cases. In addition, as discussed in the report, timeliness challenges are not only an issue for agencies that use NBIB as their investigative service provider. Agencies with delegated authority to conduct their own investigations have also experienced timeliness challenges over the past 5 fiscal years. Further, the timeliness challenges cited by agencies to GAO include government-wide challenges, such as the increased investigative requirements—not just agency-specific challenges, such as staffing shortfalls. A government-wide plan would better position ODNI to identify and address any systemic government-wide issues.

Regarding our recommendation that the Director of National Intelligence, in coordination with the other PAC Principals, conduct an evidence-based review of the timeliness objectives for completing initial secret and initial top secret security clearances, and take action to adjust the objectives if appropriate, ODNI stated that it is premature to revise the existing timeliness goals until NBIB's backlog is resolved. In its written comments, ODNI states that while clearance processing times have exceeded the established standards, this is not necessarily an indication of a flaw in the timeliness goals, but an indicator of the impact of the backlog, and that as such, the current challenge in meeting timeliness objectives should not serve as the sole basis for modifying existing goals. Our recommendation is to conduct an evidence-based review of the timeliness objectives, through which ODNI could determine whether there are any issues with the timeliness goals or, as ODNI suggests, whether the timeliness challenges are just a reflection of the backlog. At the conclusion of that review, ODNI can determine if it is appropriate to adjust the timeliness objectives, and take action if necessary. We do not suggest that ODNI should immediately revise the timeliness objectives without first determining if there is an evidence-based need to do so. ODNI further notes that other agencies that are not supported by NBIB are still achieving or very close to achieving current standards. However, as discussed in the report, even agencies with delegated authority to conduct their own investigations are experiencing challenges meeting established timeliness objectives.
ODNI further stated in response to our recommendation that the Director of National Intelligence will continue to assess the impact of the implementation of the 2012 Federal Investigative Standards and modify the timeliness goals as appropriate. Given that ODNI has not comprehensively revisited the investigation or adjudication timeliness objectives for initial security clearances since 2012, despite the increased investigative requirements stemming from the implementation of the 2012 Federal Investigative Standards, we continue to believe that our recommendation to conduct an evidence-based review, using relevant data, is valid.

Third, ODNI disagreed with our conclusion that demonstrated leadership from ODNI, in its capacity as the Security Executive Agent, and from the PAC, through assistance to NBIB as it works to reduce the investigation backlog, could better position NBIB to reach a manageable level of investigations. ODNI stated that it has demonstrated leadership in this area and has worked closely as the Security Executive Agent with NBIB to reduce its investigation backlog, and it noted recent efforts by the Director of National Intelligence and the other PAC Principals to help reduce the backlog. We believe that these recent actions, which have taken place since the completion of our review, are positive steps that, along with our recommendations to NBIB, could help to reduce the backlog of background investigations. However, as discussed in the report, prior to these recent actions, ODNI had not demonstrated the leadership necessary to improve executive branch timeliness, as evidenced by the decrease in the number of agencies meeting timeliness objectives from fiscal years 2012 through 2016 and a backlog of over 700,000 investigations as of September 2017. Additionally, while the recent actions could help to reduce the backlog, sustained demonstrated leadership by the Director of National Intelligence and the other PAC Principals will be crucial to maintaining and increasing momentum, and ultimately critical to comprehensively addressing the current timeliness challenges and reducing the investigation backlog.

Regarding our recommendation that the Director of National Intelligence develop a government-wide plan, including goals and interim milestones, to meet timeliness objectives for initial personnel security clearances, ODNI stated that it has already established timeliness goals for the security clearance process and that, prior to the investigation backlog, which was created, in part, due to a loss of OPM investigator capacity, the executive branch met those goals. ODNI further stated that until NBIB reduces its backlog, departments and agencies that use NBIB cannot accurately predict budgetary requirements for the phases of the security clearance process under their control, which complicates the development of a government-wide plan at this time. However, as discussed in the report, the most feasible date by which NBIB could reduce the backlog of background investigations to a "healthy" inventory level is fiscal year 2022 at the earliest. Given the significant timeliness challenges that the executive branch is currently experiencing, agencies would benefit from developing a government-wide plan now, rather than waiting at least 5 years for the backlog to be reduced. In addition, through the development of a government-wide plan, ODNI could help to identify additional actions to more quickly reduce the investigation backlog.
Without such a plan, continued delays in processing clearances may leave agencies unable to fill critical positions that require a security clearance. Ultimately, developing a government-wide plan, including goals and interim milestones, will better ensure timely determinations of individuals' eligibility for access to classified information. As such, we continue to believe that the recommendation is valid.

In its written comments, OPM concurred with the three recommendations directed to NBIB, and described some actions it plans to take to address them. Separate from the recommendations, OPM also provided comments related to the discussion in the draft report regarding DOD's development of NBIS and the security of OPM's IT systems and data. Specifically, OPM expressed concerns about some of the statements by DOD officials, stating that they were unverified opinions. We agree that including the countering views of OPM officials could provide some helpful context. As a result, we have added language to the report to include OPM's perspectives on the statements made by the DOD CIO officials.

In addition, OPM stated that the prior GAO and OPM Inspector General audits referenced in the IT discussion were outdated audit assessments. We agree that some information in the draft report from the prior audits was based on reports from 2016 or earlier in 2017, and we understand that circumstances may have changed since those reports were issued. Specifically, the OPM Inspector General released a new audit report in October 2017, when this report was with the agency for comment, regarding the state of security of OPM IT systems. Accordingly, we replaced the discussion of the older OPM Inspector General reports in the draft report with a discussion of the OPM Inspector General's October 2017 report. This latest OPM Inspector General report found, among other things, that OPM had made improvements in its security assessment and authorization program, and its previous "material weakness" related to authorizations has been upgraded to a "significant deficiency" for fiscal year 2017. Overall, the OPM Inspector General found that OPM's cybersecurity maturity level was measured at level 2, "Defined," meaning that its policies, procedures, and strategy were formalized and documented but not consistently implemented. We also added language to emphasize the date of the 2016 GAO reports, and added information about the status of the recommendations from those two reports, because none of the recommendations directed to OPM from the two 2016 GAO reports had been closed as implemented as of November 2017.

OPM further stated that it has implemented critical enhancements to strengthen the security of OPM's networks and has improved its security and assessment authorization process. In the draft report, we stated that OPM has strengthened the security of its networks, and we noted that—as stated in our 2017 report—OPM has made progress in improving its security to prevent, mitigate, and respond to data breaches involving sensitive personal records and background investigations information. However, as we noted in our 2017 report, we also found that OPM did not effectively monitor actions taken to remediate identified weaknesses, and we continue to believe that discussion of the deficiencies we identified in our prior reports is appropriate in this report.

In November 2017, after the conclusion of our audit work, Congress passed a bill for the National Defense Authorization Act for Fiscal Year 2018.
Among other things, the bill includes a provision that would authorize DOD to conduct its own background investigations and require DOD to begin carrying out the implementation plan required by section 951 of the National Defense Authorization Act for Fiscal Year 2017 by October 1, 2020. It would also require the Secretary of Defense, in consultation with the Director of OPM, to provide for a phased transition. While this pending legislation may affect how some background investigations are conducted, we believe that our recommendations remain important points on which the executive branch should focus in order to help improve the security clearance process as these legislative changes are implemented.

We are sending copies of this report to the appropriate congressional committees, the Director of National Intelligence, the Secretary of Defense, the Director of OMB, the Secretary of Homeland Security, the Director of OPM, the Director of NBIB, the Attorney General of the United States, the Director of the Federal Bureau of Investigation, and the Director of the Bureau of Alcohol, Tobacco, Firearms, and Explosives. In addition, this report will also be available at no charge on the GAO website at http://www.gao.gov.

If you or members of your staff have any questions regarding this report, please contact me at (202) 512-3604 or farrellb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made significant contributions to this report are listed in appendix VII.

Appendix I: Status of Prior GAO Personnel Security Clearance Recommendations to Executive Branch Agencies as of November 2017

Since May 2009, we have made 37 recommendations to appropriate executive branch agencies—the Office of Management and Budget (OMB), Office of Personnel Management (OPM), Office of the Director of National Intelligence (ODNI), Department of Defense (DOD), and Department of Homeland Security (DHS)—to improve the personnel security clearance process. As of November 2017, these agencies had implemented 12 of those recommendations; we closed 4 due to the inaction of the responsible agencies; and 21 remained open. Examples of implemented recommendations include DOD's issuance of adjudication guidance related to incomplete investigative reports; ODNI and OPM's jointly proposed chapter and part to the Code of Federal Regulations clarifying, among other things, the position sensitivity designation of national security positions; and DHS's issuance of new standards for tracking information on security clearance revocations and appeals.

The 21 recommendations that remain open as of November 2017 focused on different aspects of the personnel security clearance process. First, in February 2012, we reported on background investigation pricing and costs, and we found, among other things, that the Performance Accountability Council had not provided the executive branch with guidance on cost savings. Second, in September 2014, we reported on the security clearance revocation processes at DHS and DOD. We found that DHS and DOD data systems did not track complete revocation information; there was inconsistent implementation of the requirements in the governing executive orders by DHS, DOD, and some of their components; and there was limited oversight over the revocation process, among other things. Third, in April 2015, we reported on the status of government-wide security clearance reform efforts.
We found, among other things, that limited progress had been achieved in implementing updated Federal Investigative Standards, and that the extent to which reciprocity is granted government-wide was unknown. Fourth, in November 2017, we found that ODNI had taken an initial step to implement continuous evaluation across the executive branch, but it had not yet determined key aspects of the program, and it lacked plans for implementing, monitoring, and measuring program performance. See table 2 for the 21 open recommendations from these four reports as of November 2017.

Appendix II: Overview of Selected Personnel Security Clearance Provisions in the Intelligence Reform and Terrorism Prevention Act of 2004 (IRTPA)

The 2004 enactment of IRTPA initiated a reform effort that includes goals and requirements for improving the personnel security clearance process government-wide. Specifically, among other things, IRTPA required that:

- The President select a single entity—currently designated as the Office of the Director of National Intelligence—to be responsible for, among other things, the development and implementation of uniform and consistent policies and procedures to ensure the effective, efficient, and timely completion of security clearances.
- The President, in consultation with the head of the entity above, select a single agency—currently designated as the National Background Investigations Bureau within the Office of Personnel Management (OPM)—tasked with conducting, to the maximum extent practicable, security clearance investigations of federal employees and contractor personnel, among other things. IRTPA also required this entity to ensure that investigations are conducted in accordance with uniform standards and requirements.
- All security clearance background investigations and determinations completed by an authorized investigative agency or authorized adjudicative agency be accepted by all agencies (known as reciprocity), subject to certain exceptions.
- Not later than 12 months after the date of enactment of the act, the Director of OPM, in cooperation with the heads of the entities selected above, establish and commence operating and maintaining an integrated, secure database of personnel security clearance information.
- The executive branch evaluate the use of available information technology and databases to expedite investigative and adjudicative processes and to verify standard information submitted as part of an application for a security clearance and, not later than 1 year after enactment, submit a report to the President and the appropriate committees of Congress on the results of that evaluation.
- The executive branch submit an annual report, through 2011, to the appropriate congressional committees on the progress made toward meeting IRTPA requirements, including timeliness data and a discussion of any impediments to the smooth and timely functioning of IRTPA requirements.

IRTPA also established specific objectives for the timeliness of security clearance processing. Specifically, the act required the entity selected under section 3001(b) to develop a plan to reduce the length of the personnel security clearance process, in consultation with appropriate committees of Congress and each authorized adjudicative agency.
To the extent practical, the plan was to require that each authorized adjudicative agency make a determination on at least 90 percent of all applications for a personnel security clearance within an average of 60 days after the date of receipt of the completed application by an authorized investigative agency—not longer than 40 days to complete the investigative phase and 20 days to complete the adjudicative phase. IRTPA required the plan to take effect December 17, 2009.

Appendix III: GAO Work on Personnel Security Clearance Quality and Executive Branch Efforts to Establish Government-wide Measures for the Quality of Investigations

Since 1999 we have reported on issues related to investigative quality at the Department of Defense and the Office of Personnel Management and have issued recommendations to help ensure the personnel security clearance reform effort results in the development of metrics to track quality. Figure 6 provides an overview of our work in this area and executive branch efforts to establish government-wide performance measures for investigation quality.

Appendix IV: Timeliness of Executive Branch Periodic Reinvestigations

In November 2017, we reported on the timeliness of the executive branch's periodic reinvestigations for fiscal years 2012 through 2016, among other things. Our analysis of timeliness data for select executive branch agencies showed that the percent of agencies meeting timeliness goals decreased from fiscal year 2012 through 2016. The timeliness goals for periodic reinvestigations are outlined in a 2008 Joint Security and Suitability Reform Team report to the President entitled Security and Suitability Process Reform. Specifically, the report includes Office of Management and Budget-issued interim government-wide processing goals for security clearances for calendar year 2008. The calendar year 2008 government-wide goal for the fastest 90 percent of periodic reinvestigations is the same as the goal currently in place: 15 days to initiate a case, 150 days to conduct the investigation, and 30 days to adjudicate—totaling 195 days to complete the end-to-end processing of the periodic reinvestigation. Table 3 shows the percent of executive branch agencies meeting the timeliness goals for investigating, adjudicating, and completing the fastest 90 percent of periodic reinvestigations for at least three of four quarters from fiscal years 2012 through 2016. Specific details of the timeliness of initial secret and initial top secret clearances for select individual executive branch agencies were omitted because the information is sensitive.

Appendix V: Comments from the Office of the Director of National Intelligence

Appendix VI: Comments from the Office of Personnel Management

Appendix VII: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Kimberly Seay (Assistant Director), Nathan Tranquilli (Assistant Director), Renee S. Brown, Chris Businsky, Molly Callaghan, Jenny Chanley, Katheryn Hubbell, Saida Hussain, Jeffrey L. Knott, James Krustapentus, Caryn E. Kuebler, Michael Shaughnessy, Rachel Stoiko, Paul Sturm, John Van Schaik, Cheryl Weissman, and Jina Yu made significant contributions to this report.

Related GAO Products

Personnel Security Clearances: Additional Actions Needed to Address Quality, Timeliness, and Investigation Backlog. GAO-18-26SU. Washington, D.C.: December 7, 2017 (FOUO).
Personnel Security Clearances: Additional Planning Needed to Fully Implement and Oversee Continuous Evaluation of Clearance Holders. GAO-18-159SU. Washington, D.C.: November 21, 2017 (FOUO).

Personnel Security Clearances: Plans Needed to Fully Implement and Oversee Continuous Evaluation of Clearance Holders. GAO-18-117. Washington, D.C.: November 21, 2017.

High-Risk Series: Progress on Many High-Risk Areas, While Substantial Efforts Needed on Others. GAO-17-317. Washington, D.C.: February 15, 2017.

Personnel Security Clearances: Funding Estimates and Government-Wide Metrics Are Needed to Implement Long-Standing Reform Efforts. GAO-15-179SU. Washington, D.C.: April 23, 2015.

Personnel Security Clearances: Additional Guidance and Oversight Needed at DHS and DOD to Ensure Consistent Application of Revocation Process. GAO-14-640. Washington, D.C.: September 8, 2014.

Personnel Security Clearances: Actions Needed to Ensure Quality of Background Investigations and Resulting Decisions. GAO-14-138T. Washington, D.C.: February 11, 2014.

Personnel Security Clearances: Actions Needed to Help Ensure Correct Designations of National Security Positions. GAO-14-139T. Washington, D.C.: November 20, 2013.

Personnel Security Clearances: Opportunities Exist to Improve Quality Throughout the Process. GAO-14-186T. Washington, D.C.: November 13, 2013.

Personnel Security Clearances: Full Development and Implementation of Metrics Needed to Measure Quality of Process. GAO-14-157T. Washington, D.C.: October 31, 2013.

Personnel Security Clearances: Further Actions Needed to Improve the Process and Realize Efficiencies. GAO-13-728T. Washington, D.C.: June 20, 2013.

Managing for Results: Agencies Should More Fully Develop Priority Goals under the GPRA Modernization Act. GAO-13-174. Washington, D.C.: April 19, 2013.

Security Clearances: Agencies Need Clearly Defined Policy for Determining Civilian Position Requirements. GAO-12-800. Washington, D.C.: July 12, 2012.

Personnel Security Clearances: Continuing Leadership and Attention Can Enhance Momentum Gained from Reform Effort. GAO-12-815T. Washington, D.C.: June 21, 2012.

2012 Annual Report: Opportunities to Reduce Duplication, Overlap and Fragmentation, Achieve Savings, and Enhance Revenue. GAO-12-342SP. Washington, D.C.: February 28, 2012.

Background Investigations: Office of Personnel Management Needs to Improve Transparency of Its Pricing and Seek Cost Savings. GAO-12-197. Washington, D.C.: February 28, 2012.

GAO's 2011 High-Risk Series: An Update. GAO-11-394T. Washington, D.C.: February 17, 2011.

High-Risk Series: An Update. GAO-11-278. Washington, D.C.: February 16, 2011.

Personnel Security Clearances: Overall Progress Has Been Made to Reform the Governmentwide Security Clearance Process. GAO-11-232T. Washington, D.C.: December 1, 2010.

Personnel Security Clearances: Progress Has Been Made to Improve Timeliness but Continued Oversight Is Needed to Sustain Momentum. GAO-11-65. Washington, D.C.: November 19, 2010.

DOD Personnel Clearances: Preliminary Observations on DOD's Progress on Addressing Timeliness and Quality Issues. GAO-11-185T. Washington, D.C.: November 16, 2010.

Personnel Security Clearances: An Outcome-Focused Strategy and Comprehensive Reporting of Timeliness and Quality Would Provide Greater Visibility over the Clearance Process. GAO-10-117T. Washington, D.C.: October 1, 2009.

Personnel Security Clearances: Progress Has Been Made to Reduce Delays but Further Actions Are Needed to Enhance Quality and Sustain Reform Efforts. GAO-09-684T. Washington, D.C.: September 15, 2009.
Washington, D.C.: September 15, 2009. Personnel Security Clearances: An Outcome-Focused Strategy Is Needed to Guide Implementation of the Reformed Clearance Process. GAO-09-488. Washington, D.C.: May 19, 2009. DOD Personnel Clearances: Comprehensive Timeliness Reporting, Complete Clearance Documentation, and Quality Measures Are Needed to Further Improve the Clearance Process. GAO-09-400. Washington, D.C.: May 19, 2009. High-Risk Series: An Update. GAO-09-271. Washington, D.C.: January 2009. Personnel Security Clearances: Preliminary Observations on Joint Reform Efforts to Improve the Governmentwide Clearance Eligibility Process. GAO-08-1050T. Washington, D.C.: July 30, 2008. Personnel Clearances: Key Factors for Reforming the Security Clearance Process. GAO-08-776T. Washington, D.C.: May 22, 2008. Employee Security: Implementation of Identification Cards and DOD’s Personnel Security Clearance Program Need Improvement. GAO-08-551T. Washington, D.C.: April 9, 2008. Personnel Clearances: Key Factors to Consider in Efforts to Reform Security Clearance Processes. GAO-08-352T. Washington, D.C.: February 27, 2008. DOD Personnel Clearances: DOD Faces Multiple Challenges in Its Efforts to Improve Clearance Processes for Industry Personnel. GAO-08-470T. Washington, D.C.: February 13, 2008. DOD Personnel Clearances: Improved Annual Reporting Would Enable More Informed Congressional Oversight. GAO-08-350. Washington, D.C.: February 13, 2008. DOD Personnel Clearances: Delays and Inadequate Documentation Found for Industry Personnel. GAO-07-842T. Washington, D.C.: May 17, 2007. High-Risk Series: An Update. GAO-07-310. Washington, D.C.: January 2007. DOD Personnel Clearances: Additional OMB Actions Are Needed to Improve the Security Clearance Process. GAO-06-1070. Washington, D.C.: September 28, 2006. DOD Personnel Clearances: New Concerns Slow Processing of Clearances for Industry Personnel. GAO-06-748T. Washington, D.C.: May 17, 2006. DOD Personnel Clearances: Some Progress Has Been Made but Hurdles Remain to Overcome the Challenges That Led to GAO’s High-Risk Designation. GAO-05-842T. Washington, D.C.: June 28, 2005. High-Risk Series: An Update. GAO-05-207. Washington, D.C.: January 2005.
Why GAO Did This Study A high-quality personnel security clearance process is necessary to minimize the risks of unauthorized disclosures of classified information and to help ensure that security-relevant information is identified and assessed. The passage of IRTPA initiated an effort to reform the security clearance process government-wide. This report assesses the extent to which (1) executive branch agencies made progress reforming the security clearance process; (2) executive branch agencies completed timely initial clearances for fiscal years 2012 through 2016, and reported on timeliness; and (3) NBIB has taken steps to improve the background investigation process and address the backlog. GAO reviewed documentation; analyzed timeliness data; and interviewed officials from the four PAC Principals and NBIB. This is a public version of a sensitive report that GAO issued in December 2017. Information that the DNI and OPM deemed sensitive has been omitted. What GAO Found Executive branch agencies have made progress reforming the security clearance process, but long-standing key initiatives remain incomplete. Progress includes the issuance of common federal adjudicative guidelines and updated strategic documents to help sustain the reform effort. However, agencies face challenges in implementing certain aspects of the 2012 Federal Investigative Standards—criteria for conducting background investigations—including establishing a continuous evaluation program, and the issuance of a reciprocity policy to guide agencies in honoring previously granted clearances by other agencies remains incomplete. Executive branch agencies have taken recent steps to prioritize over 50 reform initiatives to help focus agency efforts and facilitate their completion. In addition, while agencies have taken steps to establish government-wide performance measures for the quality of investigations, neither the Director of National Intelligence (DNI) nor the Security, Suitability, and Credentialing Performance Accountability Council (PAC) has set a milestone for their completion. Without establishing such a milestone, completion may be further delayed and agencies will not have a schedule against which they can track progress or to which they are accountable. The number of executive branch agencies meeting established timeliness objectives for initial security clearances decreased from fiscal years 2012 through 2016, and reporting has been limited. For example, 59 percent of the executive branch agencies reviewed by GAO reported meeting investigation and adjudication timeliness objectives for initial top secret clearances in fiscal year 2012, compared with 10 percent in fiscal year 2016. The Intelligence Reform and Terrorism Prevention Act of 2004 (IRTPA) required the executive branch to submit an annual report, through 2011, to appropriate congressional committees on, among other things, the time required to conduct investigations, adjudicate cases, and grant clearances. Since the requirement ended, reporting has been limited to a portion of the intelligence community. Without comprehensive reporting, Congress will not be able to monitor agencies' progress in meeting timeliness objectives, identify corrections, or effectively execute its oversight role.
The National Background Investigations Bureau (NBIB), within the Office of Personnel Management (OPM), has taken steps to improve the background investigation process, but it faces operational challenges in addressing the investigation backlog and increasing investigator capacity. While NBIB has taken positive steps to improve its oversight of background investigation contracts, it faces operational challenges in reducing the investigation backlog—which grew from 190,000 cases in August 2014 to more than 709,000 in September 2017. To increase capacity, NBIB has hired additional federal investigators and increased the number of its investigative fieldwork contracts, but it has not developed a plan for reducing the backlog or established goals for increasing total investigator capacity. Without such a plan and goals, the backlog may persist and executive branch agencies will continue to lack the cleared personnel needed to help execute their respective missions. The bill for the National Defense Authorization Act for Fiscal Year 2018, passed by Congress in November 2017, would authorize DOD to conduct its own background investigations. What GAO Recommends Congress should consider reinstating the IRTPA requirement for clearance timeliness reporting. GAO is also making six recommendations, including that the DNI and other PAC Principals set a milestone for establishing measures for investigation quality, and that NBIB develop a plan to reduce the backlog and establish goals for increasing total investigator capacity. NBIB concurred with the recommendations made to it. The DNI did not concur with GAO's conclusions and recommendations. GAO continues to believe they are valid, as discussed in the report.
Background K-12 Public School Choice Public school choices typically include charter schools and magnet schools, as well as local-level options to transfer or choose among traditional public schools. CTE schools may provide an additional option for students seeking to develop or expand their employment opportunities, often in lieu of preparing for post-secondary education. Education, as well as national organizations that advocate on behalf of tribes, has noted that the flexibility associated with these options can also allow for increased tribal control and oversight of education for Indian students, and create opportunities to integrate knowledge, language, culture, and other aspects of Indian identity into the classroom, regardless of the type of school. Charter schools accounted for 6 percent of all public schools in school year 2015-16 (the school year with the most recent enrollment data available). As of that year, 43 states and the District of Columbia had enacted legislation to permit public charter schools, according to Education. The availability of magnet schools also differs across states and districts given that, in some cases, these schools are intended to eliminate, reduce, or prevent racial isolation in areas with substantial numbers of minority group students, according to Education's Magnet Schools Assistance Program. In school year 2015-16, magnet schools accounted for 3 percent of all public schools. CTE schools are less common, representing 1 percent of all public schools in 2015-16. Indian Student Enrollment in K-12 Schools Approximately 505,000 Indian students attended K-12 public schools in school year 2015-16, representing 1 percent of all public school students, according to CCD data. The majority attended traditional public schools (see fig. 1). In addition to the half a million Indian students attending public schools, approximately 45,000 attended BIE schools in school year 2015-16, according to BIE enrollment data. BIE administers 185 schools on or near Indian reservations in 23 states. BIE schools are predominantly in rural communities, serve mostly low-income students, and receive almost all of their funding from federal sources. Students attending BIE schools generally must be members of federally recognized tribes, or descendants of members of such tribes, and reside on or near federal reservations. Indian students attend public schools in settings ranging from large urban areas to remote rural areas. According to CCD data, in school year 2015-16, 58 percent of Indian students attended public schools in rural areas, while 42 percent attended public schools in urban areas (see fig. 2). Indian Student Academic Achievement Every 4 years, Education conducts the National Indian Education Study to provide in-depth information on the educational experiences and academic performance of Indian students in 4th and 8th grade. The study differentiates between public schools in which 25 percent or more of the students are Indian, public schools in which less than 25 percent of the students are Indian, and BIE schools. Data from the 2015 iteration, the most recent available, showed that Indian students attending BIE schools consistently had the lowest math and reading scores in 8th grade, while Indian students attending public schools with lower percentages of Indian students consistently performed the best in these subjects (see fig. 3).
Few School Districts with Large American Indian and Alaska Native Student Populations Offered Public School Choice Options Traditional Public Schools Were the Only Options in Most School Districts with High Percentages of American Indian and Alaska Native Students Few areas with high percentages of Indian students had options other than traditional public schools, according to our analysis of Education data for school year 2015-16. Of the 451 school districts with high percentages of Indian students in our analysis, 84 percent (378 districts) had only traditional public schools. The remaining 16 percent (73 districts) had at least one BIE school, charter school, magnet school, or CTE school. The most common option was a BIE school (see fig. 4). Among districts that had only traditional public schools, about three-quarters of them had more than one school. The presence of a school choice option or more than one traditional school in a given location does not mean that a given school is necessarily available to all Indian students in the area. This may be because of school-level factors such as enrollment caps, eligibility requirements, or grade levels offered, and environmental factors, such as limited transportation options. Indian students attend school in both urban and rural areas, though nearly all school districts with high percentages of Indian students were located in rural areas—99 percent compared to 1 percent located in urban areas. In addition, as shown in figure 5, school districts with high percentages of Indian students were generally located near tribal lands, and half of the 451 districts were located in Oklahoma. In these districts, there were a total of 119 BIE schools, 28 charter schools, 6 magnet schools, and 24 CTE schools. Most of the districts that had at least one charter, magnet, or CTE school were located in Arizona and New Mexico. See appendix II for detailed maps of the options available in school districts with high percentages of Indian students in select regions of the country. There are several reasons why there may be few public school options in districts with high percentages of Indian students. As previously noted, nearly all of these districts are in rural areas. Experts said there are often not enough students in rural areas to justify adding schools beyond the traditional public schools or BIE schools that already exist, and rural school districts can face challenges recruiting and retaining qualified teachers. We have also reported on how limited broadband internet access and poor road conditions on tribal lands can affect educational opportunities for Indian students in rural areas regardless of the type of school they attend. Districts with the Largest Number of American Indian and Alaska Native Students Had More Public School Options As previously noted, we also analyzed Education data from the 100 school districts with the largest number of Indian students. Some of these districts were located in large urban areas and a majority had at least one other option in addition to traditional public schools (see fig. 6). Of these 100 districts, 62 offered at least one option other than a traditional public school, with the most common option being charter schools (see fig. 7). With regard to the individual schools within the 100 districts with the largest number of Indian students, we found that 75 percent of the schools were concentrated in just 15 school districts. 
These 15 districts had the largest overall student enrollments and were in urban areas such as New York City, Los Angeles, and Albuquerque. As shown in figure 8, the majority of charter, magnet, and CTE schools were located in these 15 largest districts. In contrast, nearly all BIE schools were located in the 85 other districts. As noted previously, BIE schools are predominantly in rural areas and serve students who reside on or near Indian reservations. Though school districts in urban areas offered more school choice options than school districts in non-urban areas, experts said Indian students in urban areas sometimes feel isolated in their schools and communities. They noted that Indian students often account for a small percentage of all students in large urban districts and their schools are less likely to have curricula that reflect their cultural identity or provide instruction on Native languages. In the 15 largest of the 100 districts in our analysis, Indian students represented less than 5 percent of all students in each district and in some cases represented as few as 0.2 percent. In the 46 urban school districts in the 100 districts with the largest number of Indian students, just 3 districts had an Indian student enrollment greater than 25 percent. American Indian and Alaska Native Student Enrollment in Public School Options Varied by School District Even when Indian students had more school choice options, there was no consistent enrollment pattern across districts with large numbers of Indian students. In about a quarter of the districts that had at least one charter school, Indian students enrolled in charter schools at percentages similar to those of non-Indian students. In the remaining districts, enrollment patterns varied. For example, in one school district near Boise, Idaho, and another near Fairbanks, Alaska, Indian students attended charter schools at higher percentages than their peers, by 60 and 6 percentage points, respectively. In contrast, in other districts, such as one near Flagstaff, Arizona, and another near Salt Lake City, Utah, Indian student enrollment in charter schools was lower than that of their peers, by 18 and 6 percentage points, respectively. Similarly, Indian student enrollment in magnet schools varied across the 17 districts with those schools. In 10 of these districts, Indian students attended magnet schools at lower percentages than non-Indian students. For example, in one district near Sault Ste. Marie, Michigan, and another district near Broward County, Florida, Indian student enrollment in magnet schools was lower than that of their peers, by 12 and 3 percentage points, respectively. In the other 7 districts, Indian students attended magnet schools at higher percentages than non-Indian students. For example, in one district near Stockton, California, and another near Minneapolis, Minnesota, Indian student enrollment in magnet schools was higher than that of their peers, by 17 and 9 percentage points, respectively. Whether Indian students enrolled in different types of schools could be a function of local differences in school choice and could be influenced by the extent to which these schools offered curricula that reflect Indian languages, cultures, or histories.
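The district-level comparisons above reduce to a simple computation over per-school enrollment counts. The following is a minimal sketch of that computation; the record fields (district_id, indian_students, other_students, is_charter) are hypothetical stand-ins for illustration, not the CCD's actual variable names.

```python
from collections import defaultdict

def charter_enrollment_gaps(schools):
    """For each district, return the percentage-point difference between
    the share of Indian students and the share of non-Indian students
    enrolled in charter schools (positive means Indian students enrolled
    at higher rates than their peers)."""
    totals = defaultdict(lambda: {"indian": 0, "other": 0,
                                  "indian_charter": 0, "other_charter": 0})
    for school in schools:
        d = totals[school["district_id"]]
        d["indian"] += school["indian_students"]
        d["other"] += school["other_students"]
        if school["is_charter"]:
            d["indian_charter"] += school["indian_students"]
            d["other_charter"] += school["other_students"]
    gaps = {}
    for district_id, d in totals.items():
        if d["indian"] == 0 or d["other"] == 0:
            continue  # no basis for a comparison in this district
        indian_share = 100.0 * d["indian_charter"] / d["indian"]
        other_share = 100.0 * d["other_charter"] / d["other"]
        gaps[district_id] = indian_share - other_share
    return gaps
```

The same computation applies to magnet schools by substituting a magnet indicator for is_charter.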
Experts with whom we spoke said that in some areas, tribes have more control over education for Indian students, which can increase the tribe's ability to influence curricula and accountability metrics to help meet Indian students' academic and non-academic needs. Experts further noted that many districts with high percentages of Indian students are located near tribal lands, which can offer Indian students living there greater access to culturally-relevant curricula and instruction in Native languages than their peers in urban locations. In 2015, the National Indian Education Study reported that in schools where Indian students represented at least one-quarter of the students, a higher percentage of Indian students reported knowledge of their heritage or reported they received instruction in Native languages compared to peers attending schools with lower percentages of Indian students. Several tribal leaders and experts in Indian education said that access to culturally-relevant curricula and language instruction is crucial to strengthening, rebuilding, and sustaining Indian cultures and communities. In addition, experts noted that tribes sometimes seek to operate or oversee schools for Indian youth. For example, Oklahoma allows federally recognized tribes to authorize charter schools. In other states with charter school legislation, experts told us that tribes often must work through state charter school authorizers if they wish to open charter schools. BIE officials and Indian education experts also said that areas with BIE schools offer opportunities for tribes to exercise more control over education by converting the school from BIE-operated to tribally-operated. One tribal leader said the tribe was exploring this option in order to increase the tribe's autonomy over its students' education. Education has federal-level program offices that provide support to states and school districts related to school choice generally and Indian education specifically. Education recently finalized changes to its Charter Schools Program that will give priority to grantees seeking funding opportunities that would specifically serve the educational needs of Indian students. Finally, some urban school districts with large numbers of Indian students have district-level offices designed to work directly with Indian students and their families and to liaise between the school district and nearby tribes. Access to these resources, among other things, may help Indian students and families select a school that will best meet the student's academic and non-academic needs, according to Indian education experts we interviewed. Agency Comments and Our Evaluation We provided a draft of this report to the Department of Education (Education) for review and comment. We also provided a copy to the Department of the Interior's Bureau of Indian Education (BIE). Education's comments are reproduced in appendix III. Education also provided technical comments, which we incorporated as appropriate. In its written comments, Education suggested that, given the eligibility requirements to attend BIE schools, it is possible for Indian students to have greater access to educational choice than their non-Indian peers in some areas. This observation is consistent with the findings of our report, which showed that in school districts with high percentages of Indian students and school options, the most common option was a BIE school (see fig. 4). However, 84 percent of these districts offered only traditional public schools.
Nearly all of these districts were located in rural areas and, as we reported, have few school options. Education expressed concern that our analysis does not appropriately reflect the full spectrum of education choice options available to Indian students, particularly private schools. Education stated it would be helpful to understand how we determined that Education's Private School Universe Survey (PSS) was not reliable for the purposes of mapping specific locations of private schools. We clarified our rationale in appendix I. Specifically, according to Education's PSS survey documentation, the PSS was based on a sample of private schools, not the universe. The official in Education's National Center for Education Statistics (NCES) who is responsible for the PSS told us that the PSS sample contained only about half of the private schools in the nation, which would not allow for comprehensive mapping of private schools. We further explored using the broader list of private schools from which Education draws the PSS sample. The PSS documentation shows that about 30 percent of this list (more than 10,000 entities) were not private schools. We confirmed this information with the same NCES official, who explained that entities that are not private schools are filtered out through NCES's survey process. Based on our review of the PSS survey documentation and methods and our interviews with cognizant NCES officials, we determined that it would not be possible to use PSS data to comprehensively and accurately map the locations of these private schools nationally or in specific areas with large Indian student populations. However, as we indicated in the draft report on which Education commented, the PSS contains information on a large number of private schools and we determined that it can provide reliable data for some variables other than the specific locations of private schools, including the total number of students attending private schools disaggregated by race and ethnicity. As discussed, we used the PSS for such purposes in this report. In its comments, Education also encouraged us to further explore specific examples of school options that have a mission to address the unique educational needs of Indian students. We reviewed several relevant studies as part of our work, including some related to the sources Education suggested. However, an in-depth review of specific examples was outside the scope of our work. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of Education, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (617) 788-0580 or nowickij@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. Appendix I: Objectives, Scope, and Methodology This report examines the public school choice options located in areas with large populations of American Indian and Alaska Native students, collectively referred to as Indian students.
To conduct this work, we analyzed the Department of Education's (Education) national data on K-12 public schools from the Common Core of Data (CCD) for school year 2015-16 (the most recent available). Education's National Center for Education Statistics administers the CCD survey annually to collect a range of data from all public schools and districts in the nation, including student demographics (e.g., race or ethnicity) and school characteristics (e.g., school type, such as a charter or magnet school). State educational agencies supply these data for their schools and school districts. We determined the data we analyzed were sufficiently reliable for the purposes of this report by reviewing documentation, conducting electronic testing, and interviewing officials from Education's National Center for Education Statistics. To inform all aspects of our work, we interviewed federal officials from Education, the Bureau of Indian Education (BIE), and the White House Initiative on American Indian and Alaska Native Education. We interviewed or received input from representatives from several organizations that represent or advocate on behalf of Indian students and tribes, such as the National Indian Education Association, the National Advisory Council on Indian Education, the National Congress of American Indians, and the Tribal Education Departments National Assembly. We also heard from some tribal leaders who provided non-generalizable perspectives on Indian education, school choice, and academic achievement. We met with academic subject matter experts, as well as other relevant nonfederal organizations, such as ExcelinEd, the National Alliance for Public Charter Schools, and the U.S. Conference of Catholic Bishops, to discuss issues related to school choice for Indian students. Defining Areas with Large Populations of Indian Students We focused our analyses on two subsets of public school districts with large Indian student populations, as follows: 1. Public school districts in which Indian students accounted for 25 percent or more of all students in the district. We refer to school districts that met this threshold as having a "high percentage" of Indian students. It is consistent with Education's definition of a "high-density" school for Indian students, which the agency uses in its National Indian Education Study. 2. The top 100 public school districts by number of Indian students enrolled. We refer to school districts that met this threshold as having the "largest number" of Indian students. This threshold allowed us to examine school choice in areas where large numbers of Indian students live but may not represent a high percentage of all students. Education has similarly reported CCD data for the 100 public school districts with the largest number of students enrolled. School Types in Our Analysis The CCD collects data on public school type in two ways: 1. Schools are categorized as regular public schools, special education schools, career and technical education schools, or alternative/other schools based on the school's curriculum or population served. See table 1 for definitions for each of these categories. 2. In addition to the above categories, schools can have additional statuses, which are not mutually exclusive. These statuses include magnet school, charter school, and virtual school. See table 2 for definitions for each of these school statuses.
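Because each school carries both a type and one or more statuses, sorting schools into analysis categories amounts to applying precedence rules to the combination of fields. The sketch below is a simplified illustration of such a mapping, not a reproduction of the report's actual combinations (which are defined in table 3); the exclusions mirror those described later in this appendix.

```python
def analysis_category(school_type, is_charter, is_magnet, is_virtual):
    """Map a school's CCD type and statuses to a single analysis
    category. The precedence shown here is illustrative only."""
    if is_virtual:
        return None  # virtual schools lack a fixed location; excluded
    if school_type in ("special education", "alternative/other"):
        return None  # limited-enrollment schools; not counted as a choice
    if is_charter:
        return "charter"
    if is_magnet:
        return "magnet"
    if school_type == "career and technical education":
        return "career and technical education"
    if school_type == "regular":
        return "traditional"
    return None  # anything else falls outside the analysis
```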
Because the CCD collects public school type data in two ways, we sorted schools based on the combination of school types and statuses to develop distinct categories for our analysis. Table 3 outlines the combinations of CCD school type and status, along with the corresponding category we used in our analysis. For reporting purposes, we used the term “traditional school” in place of “regular school” to be consistent with our prior reports on K-12 education issues that analyzed the CCD and other Education datasets. In addition to the school types listed above, we included BIE schools in our analysis because they may provide a unique school option in some areas with large populations of Indian students. Data on the location of BIE schools were captured in the 2015-16 CCD. BIE also provided us with enrollment data for its schools, which we reviewed to determine that the presence of BIE schools did not affect our analysis of Indian student enrollment in other types of schools. We focused our analysis on (1) traditional public schools, (2) charter schools, (3) magnet schools, (4) career and technical education schools, and (5) BIE schools. Traditional public schools provided a baseline from which to compare other school choices in a given school district. We referred to the other four school types as “school choice options” collectively. We considered a school district as having school choice options if the district included at least two schools in total, and offered at least two of the five school types in our analysis. We compared school districts with school choice options to school districts that had only traditional public schools. In school districts with high percentages of Indian students, there were no schools that reported having both charter and magnet school status. In the 100 school districts with the largest number of Indian students, there were 6 school districts that reported a total of 17 schools as having both charter and magnet status. This did not affect our analysis of school districts with school choice options because each of those 6 districts had at least one additional school that had only charter status and at least one additional school that had only magnet status in school year 2015-16. We excluded special education schools, alternative/other schools, and schools flagged as state-operated juvenile justice facilities from our data analysis because those schools limited enrollment and could not be classified as a choice. We did not consider virtual schools in our analysis because, as defined in the CCD, these schools generally do not have a physical facility, which limits the ability to ascribe a virtual school to a specific location or school district. Similar limitations would apply to studying homeschooling or non-public online educational options, which are not captured in the CCD. We also excluded schools that were reported closed, inactive, or not yet opened in 2015-16. As noted previously, we focused our analyses on (1) school districts with high percentages of Indian students and (2) the 100 school districts with the largest number of Indian students. In school year 2015-16, there were 453 school districts with high percentages of Indian students. However, in our analysis we found one school district with a high percentage of Indian students that did not offer any traditional, charter, magnet, career and technical education, or Bureau of Indian Education schools, and one school district that offered one magnet school, but no other schools. 
We excluded these two districts from our analysis because they did not offer any choice as described above. After excluding these two districts, there were 451 school districts with high percentages of Indian students in our analysis. In total, and after accounting for overlap among school districts that had both high percentages and large numbers of Indian students, our analysis included 259,033 students—51 percent of all Indian students attending public schools in school year 2015-16—across 504 school districts. We did not consider private schools in our analysis. Education collects biennial data on private schools through its Private School Universe Survey (PSS), which we determined was a reliable dataset for describing aggregate data on the total number of Indian students that attended private schools in school year 2015-16. However, we determined the data were not sufficiently reliable for analysis of the specific locations of private schools. Unlike the CCD, which captures data on the universe of public schools, the PSS is based on a sample of private schools, according to Education's PSS survey documentation. The official in Education's National Center for Education Statistics (NCES) who is responsible for the PSS told us that the PSS sample captured only about half of the private schools in the nation. We further explored using the broader list of private schools from which Education draws the PSS sample; however, the PSS documentation showed that this list contained more than 10,000 entities—or 30 percent of the entire list—that were not private schools. We confirmed this information with the same NCES official. Based on our review of the PSS documentation, as well as our discussions with cognizant NCES officials, we determined that it would not be possible to use the PSS data to comprehensively and accurately map the locations of these private schools nationally or in specific areas with large Indian student populations. To analyze school choice options in school districts with large Indian student populations, we analyzed all relevant schools within the public school district's geographic boundary regardless of the administrative school district it was assigned to in the CCD. This allowed us to account for all public schools and BIE schools in a given area that could be an option for Indian students. It was necessary because, for example, charter schools or BIE schools are sometimes recorded in the CCD as their "own district," i.e., separate from the public school district for a given area because of the local public school administrative structure. We further examined school choice based on a school district's location in urban and rural areas. The CCD collects location data using classifications ranging from large cities to remote rural areas. For analysis, we collapsed these classifications into two categories, consistent with Education's analyses: (1) urban areas, i.e., locations classified as cities or suburbs, and (2) rural areas, i.e., locations classified as towns or rural. Appendix II: Additional Maps This appendix contains maps of selected regions of the country to provide a more in-depth view of the school choice options available in school districts in which American Indian and Alaska Native students accounted for 25 percent or more of all students in the district.
Appendix III: Comments from the Department of Education Appendix IV: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Bill Keller (Assistant Director), David Watsula (Analyst-in-Charge), Susan Aschoff, James Bennett, Deborah Bland, Connor Kincaid, Jean McSween, John Mingus, James Rebbe, and Leanne Violette made key contributions to this report. Related GAO Products Private School Choice: Requirements for Students and Donors Participating in State Tax Credit Scholarship Programs. GAO-18-679. (Washington, D.C.: September 18, 2018). Broadband Internet: FCC’s Data Overstate Access on Tribal Lands. GAO-18-630. (Washington, D.C.: September 7, 2018). Native American Youth: Involvement in Justice Systems and Information on Grants to Help Address Juvenile Delinquency. GAO-18-591. (Washington, D.C.: September 5, 2018). High Risk: Agencies Need to Continue Efforts to Address Management Weaknesses of Federal Programs Serving Indian Tribes. GAO-18-616T. (Washington, D.C.: June 13, 2018). Private School Choice: Federal Actions Needed to Ensure Parents are Notified about Changes in Rights for Students with Disabilities. GAO-18-94. (Washington, D.C.: November 16, 2017). Tribal Transportation: Better Data Could Improve Road Management and Inform Indian Student Attendance Strategies. GAO-17-423. (Washington, D.C.: May 22, 2017). School Choice: Private School Choice Programs Are Growing and Can Complicate Providing Certain Federally Funded Services to Eligible Students. GAO-16-712. (Washington, D.C.: August 11, 2016). Indian Affairs: Bureau of Indian Education Needs to Improve Oversight of School Spending. GAO-15-121. (Washington, D.C.: November 13, 2014). Indian Affairs: Better Management and Accountability Needed to Improve Indian Education. GAO-13-774. (Washington, D.C.: September 24, 2013).
Why GAO Did This Study Education refers to school choice as the opportunity for students and their families to create high-quality, personalized paths for learning that best meet the students' needs. For Indian students, school choice can be a means of accessing instructional programs that reflect and preserve their languages, cultures, and histories. For many years, studies have shown that Indian students have struggled academically and the nation's K-12 schools have not consistently provided Indian students with high-quality and culturally-relevant educational opportunities. GAO was asked to review K-12 school choice options for Indian students. This report examines the public school options located in areas with large Indian student populations. GAO used Education's Common Core of Data for school year 2015-16 (most recent available) to analyze public school choice in (1) school districts in which Indian students accounted for 25 percent or more of all students (i.e., high percentages of Indian students) and (2) the 100 school districts with the largest number of Indian students. GAO also interviewed federal officials, relevant stakeholder groups, and tribal leaders to better understand school choice options for Indian students. What GAO Found Few areas provide American Indian and Alaska Native students (Indian students) school choice options other than traditional public schools. According to GAO's analysis of 2015-16 Department of Education (Education) data, most of the school districts with Indian student enrollment of at least 25 percent had only traditional public schools (378 of 451 districts, or 84 percent). The remaining 73 districts had at least one choice, such as a Bureau of Indian Education, charter, magnet, or career and technical education school (see figure). Most of these 451 districts were in rural areas near tribal lands. Rural districts may offer few school choice options because, for example, they do not have enough students to justify additional schools or they may face difficulties recruiting and retaining teachers, among other challenges. Some of the 100 school districts with the largest number of Indian students were located in large urban areas, such as New York City, and the majority (62) offered at least one option other than a traditional public school, according to GAO's analysis. The most common option was a charter school. However, because Indian students often account for a small percentage of all students in these districts, Indian education experts GAO interviewed said that the schools are less likely to have curricula that reflect Indian students' cultural identity or provide instruction on Native languages—things that tribes and experts consider crucial to strengthening, rebuilding, and sustaining Indian cultures and communities. Also, even when Indian students had more options, no consistent enrollment patterns were evident. Whether Indian students enrolled in different types of schools could be a function, in part, of differences in state school choice laws and the extent to which these schools offered curricula that reflect Indian languages, cultures, or histories, according to Indian education experts. What GAO Recommends GAO is not making recommendations in this report.
Background Federal and State Roles in Addressing SNAP Fraud The goal of SNAP, formerly known as the federal Food Stamp Program, is to help low-income individuals and households obtain a more nutritious diet by supplementing their income with benefits to purchase allowed food items. The federal government pays the full cost of the benefits and shares the responsibility and costs of administering the program with the states. The overarching rules governing SNAP are set at the federal level. Accordingly, FNS is responsible for promulgating program regulations and ensuring that state officials administer the program in compliance with program rules. FNS officials in seven regional offices assist headquarters officials in this oversight work. FNS also determines which retailers are eligible to accept SNAP benefits for food purchases and investigates and resolves cases of retailer fraud. The states, or in some cases counties, administer the program by determining whether households meet the program's eligibility requirements, calculating monthly benefits for qualified households, and issuing benefits to participants on an electronic benefit transfer (EBT) card. States are also responsible for investigating possible violations by benefit recipients and pursuing and acting on those violations that are deemed intentional. Types of SNAP Fraud and State Anti-Fraud Mitigation Strategies Intentional program violations include acts of fraud, which involve obtaining something of value through willful misrepresentation. Eligibility fraud involves individuals making false or misleading statements in order to obtain benefits, including statements about household composition, household expenses, and income. Failing to report changes to household circumstances that may affect benefits can also result in eligibility fraud under certain circumstances. When recipients are certified for SNAP, state agencies assign them to a reporting system for notifying the state of certain changes. These changes include a change of address, whether in-state or out-of-state. Some systems require recipients to report within a certain period of time of the change occurring, often within 10 days. Other reporting systems—including simplified reporting—require recipients to submit reports periodically. Households subject to reporting on a periodic basis must generally submit reports not less often than once every 6 months. One type of eligibility fraud is dual participation, in which a recipient receives benefits in more than one state in the same month. Another type of SNAP fraud is trafficking, in which benefits are exchanged for cash or non-food goods and services. Trafficking may occur when recipients collaborate with retailers who pay cash for SNAP benefits. For example, a retailer might allow a recipient to charge $100 on his or her EBT card and then pay the recipient $50 instead of providing food. Trafficking also occurs when a recipient exchanges an EBT card and the corresponding Personal Identification Number (PIN) for cash or non-food goods or services (e.g., rent or transportation) from another individual. According to a September 2012 USDA Office of Inspector General (OIG) report, the magnitude of program abuse due to recipient fraud is unknown because states do not have uniform ways of compiling such data. OIG recommended that FNS determine the feasibility of creating a uniform methodology for states to calculate their recipient fraud rate.
In 2014, FNS responded that it would be infeasible to implement the recommendation as it would require legislative authority mandating significant state investment of time and resources in investigating, prosecuting, and reporting fraud beyond current requirements. States must adhere to various federal requirements for detecting SNAP recipient fraud, conducting investigations, and providing due process prior to disqualifying recipients from participating in the program. The household is responsible for repaying ill-gotten or misused benefits. States may generally retain 35 percent of the fraudulent benefits they recover, and the rest are returned to the federal government. Data Analytics The use of data analytics enables the discovery and communication of meaningful patterns in data so that states can determine which potential SNAP fraud cases to review in detail. States have access to various types of data in their case management systems, including recipient-provided information and benefits data collected throughout the SNAP eligibility determination process. Other information sources available to states include transaction data collected by EBT processors, data from previous fraud investigations, and third-party data from other government agencies or commercial vendors (see fig. 1). Data-analytics activities can include a variety of techniques to prevent and detect fraud, including data matching and data mining. Data matching is the large-scale comparison of records and files to detect errors or incorrect information. It can be used to verify information provided by recipients or detect unreported changes. Data mining is the use of automated computer algorithms to detect otherwise hidden patterns, correlations, or anomalies within large data sets indicative of potential fraud, thus assisting programs in recovering fraudulent payments (see fig. 2). Federal laws and regulations require states to conduct certain data matches when an application for benefits is submitted and at other times to verify an individual's reported employment and immigration status, as well as to ensure the information provided is not for an individual who is incarcerated, deceased, or disqualified from the program (see table 1). GAO's Fraud Risk Framework identifies the following leading practices to help managers effectively use data to mitigate the likelihood and impact of fraud (see table 2). While these leading practices can help managers design and implement effective data-analytic tools and techniques to prevent and detect potential fraud, as discussed in the Fraud Risk Framework, these techniques alone may not be sufficient to ensure that ineligible individuals do not fraudulently enroll in a program or receive benefits. As a result, managers may need to combine data-analytics activities with additional controls as part of their efforts to combat fraud, in a strategic, risk-based manner. SNAP Transaction Data from Selected States Show Relatively Few Households with Out-of-State Purchases Indicating Potential Fraud Out-of-State Purchases Are Allowed by SNAP Rules and Their Dollar Value Represents a Small Percentage of Purchases A relatively large number of SNAP households made purchases outside their home state, as allowed under the SNAP statute, but the total dollar value of out-of-state purchases was small compared to SNAP purchases overall, according to our analysis of FNS SNAP transaction data. We identified approximately 5.5 million households that made out-of-state SNAP purchases in fiscal year 2017.
In comparison, FNS reported that the monthly average number of SNAP households was approximately 21 million in fiscal year 2017. Out-of-state purchases made up approximately 3 percent of all SNAP benefits in fiscal year 2017, with a total dollar value of about $2 billion (see fig. 3). Out-of-state purchases may occur for different reasons, one of which is that a recipient may live on or near a state border and regularly shop across the state line. For example, District of Columbia recipients spent about half of their SNAP benefits out of state in fiscal year 2017. All District of Columbia residents are in close proximity to both Maryland and Virginia, which are no more than approximately 7 miles from any point in the District. In general, about a third (34 percent) of households nationwide with out-of-state purchases spent $50 or less on those purchases in fiscal year 2017. See Appendix II for a detailed listing of out-of-state purchases by state. Out-of-state purchases may also indicate potential program violations, including eligibility fraud or trafficking. However, because out-of-state purchases are permitted, analysis of additional household and transaction information is generally needed to identify potential fraud, as discussed below. Of out-of-state transactions, purchases in a state that did not border the recipient's home state (non-border state) made up approximately 1 percent of all SNAP benefits in fiscal year 2017, as shown in figure 3 above. There were 2.2 million SNAP households that made at least one purchase in a non-border state in fiscal year 2017, and the percent of SNAP benefits spent in a non-border state in that year ranged from approximately 0.6 percent to 1.9 percent. In fiscal year 2017, states whose SNAP recipients spent the highest percentage of their SNAP benefits in non-border states included Colorado, Hawaii, Montana, North Dakota, and Rhode Island. SNAP Purchases in Non-Border States Raise Questions of Residency for a Relatively Small Percentage of Households in Selected States Overall, we found that for fiscal year 2017, less than 0.5 percent of households in our three selected states spent all their SNAP benefits for the entire fiscal year in a non-border state (see table 3). Use of benefits in stores that are a long distance from a recipient's residence for extended periods of time, such as purchases exclusively in non-border states over multiple months, could be an indicator of program violations, including eligibility fraud. The total value of SNAP transactions by households in our three selected states that made all purchases in non-border states in fiscal year 2017 was approximately $1.9 million. These purchases represent about 0.1 percent of all SNAP benefits for fiscal year 2017 in the three selected states. When SNAP benefits are used in a non-border state over an extended period of time, this could indicate possible intentional program violations such as an unreported move and other household changes that could impact eligibility. SNAP officials we interviewed said that in some cases a recipient may delay reporting a move if they are enrolled in SNAP in a state with a lower barrier to entry to the program. At the same time, the rules around reporting a move and residency may make it difficult to determine when a recipient has violated program rules.
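A screen of this kind can be expressed directly over a fiscal year of transaction records: retain only households with no purchase in their home state or a bordering state. The sketch below is a minimal illustration; the record fields and the border lookup are assumptions, not the layout of FNS's EBT transaction data.

```python
def all_purchases_in_nonborder_states(transactions, borders):
    """Return the IDs of households whose every purchase in the period
    occurred in a state that neither is nor borders their home state.
    transactions: iterable of dicts with 'household_id', 'home_state',
    and 'purchase_state'. borders: dict mapping each state to the set
    of states it borders."""
    seen = set()
    disqualified = set()  # made at least one home- or border-state purchase
    for t in transactions:
        household = t["household_id"]
        seen.add(household)
        home = t["home_state"]
        if t["purchase_state"] == home or t["purchase_state"] in borders[home]:
            disqualified.add(household)
    return seen - disqualified
```

Because out-of-state purchases are permitted, a household flagged by such a screen would warrant case review rather than automatic action.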
Recipients are not required to immediately report a move in some cases due to simplified reporting rules that allow a recipient to report household changes only periodically, generally every 6 months. Also, officials we interviewed in the three selected states told us that there are no set time limits for a SNAP recipient to reside in a new state before the former state revokes the recipient's residency. For example, a recipient may be out of state for an extended period of time for personal reasons, such as helping a relative, but still intend to reside in the state where they are enrolled in SNAP. In that case, according to state officials, the recipient would not necessarily need to report a move and may not be violating program rules. In addition to the program violations related to an unreported move, use of SNAP benefits in a non-border state over extended periods of time could bring into question whether a recipient is also enrolled in SNAP in another state (i.e., dual participation). Also, it may indicate changes in the household that could impact eligibility, including questions about whether a recipient is earning unreported income in the state where they are using their benefits. While state SNAP agencies stated that they conduct data matching meant to detect dual participation and unreported income, states also noted challenges with these matches. State agencies told us that they use the PARIS system to detect possible dual participation, and both NDNH and the Work Number to identify recipient income. However, challenges officials cited in using these systems included lags in the data provided and additional work required to confirm the data. The use of data analytics to review recipient transaction data may help states identify suspicious household activity more easily than with data matching alone, given the challenges associated with these systems. In addition, data analytics may be another tool to help states identify suspicious activities in a timely manner. Given the possibility of eligibility fraud or other program violations, we plan to refer the households that our data analysis identified as spending all benefits in a non-border state to their respective state SNAP agencies for further investigation. Selected Households' Out-of-State and In-State SNAP Purchases Had Similar Levels of Potential Trafficking Based on our analysis of fiscal year 2017 transaction data in the three selected states, we found that SNAP households without out-of-state purchases were generally just as likely to have made the types of purchases that may indicate trafficking of benefits as households with out-of-state purchases. Overall, we found that approximately 2 percent of all households in the three selected states, including both households that shopped out-of-state and those that shopped in state only, had a high number of purchases potentially indicative of SNAP trafficking. However, for two selected states, there was little to no difference in the percentage of households with this activity when we compared households that only shopped in their home state and households that shopped out-of-state. For one state, a greater percentage of households that shopped out-of-state had purchases indicative of SNAP trafficking, but households in this state also had different shopping patterns in general, as discussed below. In addition, for households that shopped out-of-state, few of the transactions we flagged as indicators of potential trafficking occurred outside the home state.
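This report does not detail the specific transaction patterns counted as trafficking flags, so the sketch below substitutes two commonly cited indicators (large even-dollar purchases and rapid repeat purchases at the same store) purely for illustration; the thresholds and record layout are assumptions, not the flags used in this analysis.

```python
from collections import defaultdict

RAPID_REPEAT_SECONDS = 3600  # assumption: a repeat within one hour
EVEN_DOLLAR_MINIMUM = 50     # assumption: only large even-dollar amounts

def trafficking_flag_counts(transactions):
    """Count hypothetical trafficking indicators per household.
    transactions: dicts with 'household_id', 'store_id', 'amount'
    (dollars), and 'timestamp' (seconds since epoch)."""
    flags = defaultdict(int)
    last_visit = {}  # (household_id, store_id) -> last timestamp
    for t in sorted(transactions, key=lambda t: t["timestamp"]):
        household = t["household_id"]
        if t["amount"] >= EVEN_DOLLAR_MINIMUM and t["amount"] == int(t["amount"]):
            flags[household] += 1  # large even-dollar purchase
        key = (household, t["store_id"])
        previous = last_visit.get(key)
        if previous is not None and t["timestamp"] - previous <= RAPID_REPEAT_SECONDS:
            flags[household] += 1  # repeat purchase at the same store
        last_visit[key] = t["timestamp"]
    return flags
```

Households whose counts reach a threshold, such as the 20 or more flags used in the comparisons that follow, could then be prioritized for investigation.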
Although we found that rates of trafficking indicators were generally similar between households that shopped out-of-state and those that only shopped in their state of residence, the analysis of transaction data for other factors may allow states to identify households at risk of trafficking and make them a higher priority for investigation. Our prior work reported on the benefits of SNAP transaction data analysis for this purpose. Specifically, we found that for North Dakota and Washington, households that made one or more purchases out of state had similar rates of purchases flagged for potential trafficking compared to households that shopped only in their home state. This held true both for households that shopped only in border states and for households that shopped in non-border states (see table 4). For example, 1.4 percent of Washington SNAP households that only shopped in their home state had purchases resulting in 20 or more trafficking flags in fiscal year 2017, and 1.8 percent of Washington households that also shopped in border states had 20 or more trafficking flags. For Washington households that also shopped in non-border states, 1.5 percent made purchases resulting in 20 or more flags. Our analysis of District of Columbia households identified higher rates of potential trafficking indicators for households that shopped out-of-state, compared to the other two selected states. Specifically, 1.4 percent of District of Columbia SNAP households that only shopped in their home state had purchases resulting in 20 or more trafficking flags in fiscal year 2017, and 5.7 percent of households that also shopped in border states had 20 or more trafficking flags. For District of Columbia households that also shopped in non-border states, 8 percent made purchases resulting in 20 or more flags. However, the difference in rates for District of Columbia trafficking indicators may reflect the different shopping patterns of its households when compared to other states. As stated previously, District of Columbia households made about half of their SNAP purchases out-of-state, a significantly higher share than households in any other state. In addition, all District of Columbia households are within approximately 7 miles of the bordering states of Maryland and Virginia. Also, a small percentage of District of Columbia households shopped only in their home state in fiscal year 2017—approximately 7 percent of all households reviewed. In comparison, approximately 62 percent of North Dakota households and 76 percent of Washington households made all purchases in their home state. For the households in North Dakota and Washington that shopped out-of-state in fiscal year 2017, we found that most transactions indicating potential trafficking occurred in the recipient's home state rather than out-of-state (see fig. 4). District of Columbia households were the exception: most of their transactions indicating potential trafficking occurred in the recipient's home state or in a border state. However, the pattern of trafficking flags also aligns with where District of Columbia SNAP recipients tend to shop, given that approximately half of their SNAP purchases were made in border states in fiscal year 2017. While we identified households in selected states with out-of-state purchases that indicated potential trafficking, identifying such households required additional data analysis of factors beyond purchase location.
Analysis of additional data elements may allow states to better identify potential trafficking requiring investigation. We found that out-of-state purchase information alone is of limited benefit in identifying SNAP households that may be engaged in trafficking. Some Selected States Reported Using Data Analytics Beyond Required Data Matching and Cited Advantages As Well As Organizational and Resource Challenges Selected States Reported Doing Required Data Matching, and Some Reported Conducting Additional Data Analytics Officials we interviewed in all seven of the states we selected for our review of data analytics use reported conducting federally required data matching to verify information provided by households when they initially apply or recertify for SNAP benefits. Federal law and regulations require states to conduct certain data matches when determining SNAP eligibility, including matches that provide information on people who may be incarcerated, deceased, or disqualified from receiving SNAP benefits due to intentional program violations. The five databases that state SNAP agencies are required to conduct matches against when determining SNAP eligibility are the Department of Health and Human Services' (HHS) National Directory of New Hires, the Social Security Administration's (SSA) Prisoner Verification System, SSA's Death Master File, U.S. Citizenship and Immigration Services' Systematic Alien Verification for Entitlements, and FNS's Electronic Disqualified Recipient System (eDRS). As we previously reported, state SNAP agencies use data matching to obtain information about households' income, verify information provided by households, or identify potential discrepancies. Specifically, agencies are required to verify household data electronically by matching their data with specific government sources and have the option to match against additional data sources. In addition to the required data matching, officials we interviewed in all seven selected states also reported conducting other data matching with a range of internal and external data sources. These matches used information from federal, state, and commercial data sources on earned income from employment or self-employment or unearned income from other government benefit programs. According to state officials, these sources included Unemployment Insurance information from state workforce agencies, the PARIS file from HHS, and The Work Number, a commercial verification service. Other sources that could be used include Old-Age, Survivors, and Disability Insurance income information and Supplemental Security Income information from multiple data matches with the SSA. In addition to verifying applicants' initial eligibility, data matching can identify changes in key information that could affect continued eligibility. Beyond data matching, officials in all seven selected states said that they had access to EBT reports notifying them of suspicious transactions, although the type and frequency of use of these reports varied. For example, while some state officials said that they manually generated reports on an ad hoc basis, other state officials said that they had automated reports that they received and reviewed on a weekly or monthly basis. As we previously reported, automating data analytics tests can allow agencies to monitor large amounts of data more efficiently than with manual tests. Furthermore, officials in all seven selected states reported that they had examined out-of-state transactions to some extent.
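One basic form of such examination, elaborated in the state practices described below, is flagging households whose benefits have been redeemed exclusively out of state for an extended period. The following minimal sketch assumes a simple list of dated, state-coded transactions and a 70-day window; both the data layout and the threshold are illustrative assumptions rather than any state's actual rule.

```python
# Minimal sketch: flag a household whose EBT use has been exclusively
# out of state for longer than a set window (the 70- and 90-day alerts
# described below). Data layout and threshold are assumptions.
from datetime import date

def sustained_out_of_state(transactions, home_state, window_days=70):
    """transactions: list of (date, state) tuples, sorted by date.
    Returns True if the most recent run of exclusively out-of-state
    use has lasted at least window_days."""
    run_start = None
    last_date = None
    for tx_date, state in transactions:
        if state == home_state:
            run_start = None          # home-state use resets the clock
        elif run_start is None:
            run_start = tx_date       # first transaction of a new run
        last_date = tx_date
    if run_start is None or last_date is None:
        return False
    return (last_date - run_start).days >= window_days

# Example: every purchase since early January was made out of state.
txs = [(date(2017, 1, 3), "MD"), (date(2017, 2, 10), "VA"),
       (date(2017, 3, 20), "MD")]
print(sustained_out_of_state(txs, home_state="DC"))  # True (76-day run)
```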
Some states had access to out-of-state reports as part of their suite of EBT reports but did not review them often, while other states automatically received alerts if households consistently used benefits out of state over a certain extended period of time, such as 70 or 90 days. For example, officials from Massachusetts told us that they flag certain transactions to help ensure recipients comply with the state's residency requirements for eligibility. Specifically, after a client spends their benefits out of state for 70 days or more, the state agency will send a letter asking the client to prove they are still a Massachusetts resident. Officials generally reported that tracking out-of-state transactions was most useful for finding potential dual participation—a household receiving benefits in two or more states. Officials we interviewed in five of seven selected states reported conducting further, more sophisticated data analytics involving data mining—the active and recurring monitoring of EBT transactions using algorithms to detect and flag transactions that indicate potential recipient fraud, often on a real-time or near real-time basis. For example, officials told us that these states—the District of Columbia, Massachusetts, Mississippi, Washington, and Wisconsin—examined a range of indicators of potential recipient fraud. Some of the five selected states automated their data mining to monitor data for potential fraud indicators on a continuous, real-time basis. In addition to data mining, some of these five states reported using other more advanced data analytics techniques, including mapping analysis and a form of predictive analysis to identify SNAP purchases that could indicate trafficking. For example, officials in the District of Columbia reported using location mapping to identify households that spent their benefits long distances from home. Officials we interviewed in Wisconsin reported developing an automated check intended to flag particular types of case characteristics indicative of potential fraud. According to the Wisconsin officials, if a particular case is flagged, a caseworker must follow up and provide extra scrutiny before the case can move forward in the eligibility process. As we previously reported, certain types of predictive data analytics can increase the effectiveness of anti-fraud programs by identifying particular types of potentially fraudulent behavior. Selected States That Reported Conducting Additional Data Analytics Also Employed More Leading Practices and Cited Advantages in Using Data Analytics Officials we interviewed in the five selected states that reported conducting additional data analytics—the District of Columbia, Massachusetts, Mississippi, Washington, and Wisconsin—employed more of GAO's leading practices for data analytics than the two states that used data matching alone—New Mexico and North Dakota. Organizational and leadership support. The five states with more sophisticated data analytics techniques all reported to us that they had organizational and leadership support for those activities. GAO's leading practices state that to be effective, data-analytics initiatives need support across the program and, in particular, from program managers. Officials in these states cited support from executive and legislative state leadership for the use of data analytics to combat SNAP recipient fraud.
For example, officials in Wisconsin reported that the governor's office worked to centralize the agency's data-analytics activities and support infrastructure to improve business processes. Officials in Mississippi told us that the state's executive leadership fully supports the use of data to combat SNAP recipient fraud and that the state legislature in 2017 passed a law to assist in the identification of waste, fraud, and abuse. Pursue external data. These states also reported to us that they were able to obtain external data necessary for their data analytics activities. For example, officials in Mississippi told us that they interface with an array of data sources, including the National Accuracy Clearinghouse, the state Department of Employment Security, and the state Department of Education, among others. GAO's leading practices state that using data from other federal agencies or third-party sources can help managers identify potential instances of fraud. As we mentioned previously, the states that reported conducting additional matching beyond that required by federal law and regulation also reported using an array of federal, state, and third-party sources for these data matches. Consider program rules or previously encountered schemes. These five states also reported that they considered program rules and known or previously encountered fraud schemes to help design their data analytics practices, another of GAO's leading practices for data analytics. These leading practices note that by using information on previously encountered fraud schemes or known fraud risks, managers can identify signs of fraud (i.e., red flags) that may exist within their data. For example, two states reported that they change their data analytics techniques in response to changing patterns of fraud. All five selected states that reported conducting additional data analytics practices beyond data matching cited a number of associated advantages, including increased efficiency and effectiveness of their anti-fraud efforts. Automating fraud detection. All five states reported that data analytics provided the advantage of automating the detection of potentially fraudulent activity. For example, officials in Mississippi noted that a new investigation management system implemented in their state would use algorithms to detect potential fraud and automatically generate flags, whereas in the past they had to examine transactions manually. Financial savings. Four states reported that data analytics had the advantage of financial savings through the collection of overpayments and the closure of cases. For example, officials in Washington said that the state's data matching activities saved millions of dollars through the closure of cases. Officials in Mississippi reported that the state's overpayment collections increased $2 million since moving to a new investigation management system a few years ago that incorporates more data analytics techniques. Prioritizing and enhancing investigations. Four states reported that data analytics helped them prioritize and enhance fraud investigations. For example, officials in Washington said that they had a system in place that used an algorithm to rank each fraud referral based on a number of factors and moved higher-risk referrals to the top of the list of investigations.
Officials in Wisconsin said that they combined eligibility, transaction, and retailer data and analyzed them to produce a prioritized list of individuals who appeared most likely to have trafficked at a specific retailer, allowing them to focus their investigative resources on cases most likely to be fraud. Preventing fraud. Finally, two states reported that data analytics had the advantage of improving the return on investment of anti-fraud activities through the prevention of fraud before it occurs. For example, officials in Wisconsin estimated that data analytics has helped them prevent a large proportion of fraud before it occurs, thereby improving the cost-benefit of their anti-fraud practices. Officials in Mississippi noted that data analytics can be an effective deterrent. Selected States Reported Organizational and Resource Challenges in Effectively Using Data Analytics Officials we interviewed in all seven selected states reported a range of organizational and resource challenges that either prevented them from using more advanced data analytics techniques or made their current data analytics practices difficult to implement. Quantifying benefits of data analytics. Officials we interviewed in two states said it was challenging to quantify the benefits of data analytics, resulting in a lack of sound evidence to support the utility of this type of work. For example, officials in Washington reported that it was difficult to conduct a cost-benefit analysis of data analytics because of the challenge of quantifying how often fraud is prevented before it occurs. Officials in Wisconsin reported that the state attempted to measure future savings from fraud prevention but that there is no guidance for how to determine these savings. Obtaining organizational support. Officials in two states reported that it was challenging to obtain sufficient organizational support for conducting data analytics. For example, officials in North Dakota reported that they could not say how much support exists in the state government to pursue additional resources for data analytics. Those in the District of Columbia noted that it is sometimes difficult to convince certain employees of the need for data analytics to detect fraud. Appearing to criminalize legitimate use. Officials in three states said that a challenge to using more advanced data analytics was that it could appear to profile recipients or make it appear to the general public and to policy-makers that certain legitimate uses of SNAP benefits, such as using benefits out-of-state, were not allowed. For example, Washington tracked the number of replacement EBT cards as a possible indicator of fraud, but officials said that there were many cases in which the client had legitimate reasons for needing a high number of replacement cards, such as mental health issues or homelessness. Washington officials further noted the challenge of using demographic data in a predictive model, reporting that it puts them at risk of profiling even though it can be helpful. For example, when they examined recipients with high balances on their EBT cards, demographic information provided an explanation: elderly individuals, in particular, were being frugal with their benefits. Dealing with changing patterns of fraud. Officials we interviewed in three states said that a challenge to using data analytics was dealing with changing patterns of fraud.
They said that the characteristics of transactions that may indicate potential fraud are constantly changing as fraudulent actors change their tactics in response to state enforcement. For example, officials in Mississippi said that recipients committing fraud might change from high-dollar to low-dollar transactions, in which case the state would need to adjust its monitoring accordingly. Obtaining necessary data. Officials we interviewed also reported challenges with obtaining the data needed to conduct data analytics. Officials in three states said that simplified reporting presents a challenge to using data analytics to detect potential recipient fraud. Specifically, simplified reporting made it challenging to use certain information as potentially indicative of fraud because recipients are not required to report certain changes—for example, a move out of state—until it is time for them to recertify for benefits. In addition, officials in three states reported challenges in verifying data so that they could be considered reliable enough to use. For example, Massachusetts reported that one of the biggest challenges of developing investigative leads through data analytics is that not all data are considered equally reliable. For SNAP, FNS guidance defines some data matches as "verified upon receipt" if the match is with a primary or original source of the data (such as information on a government benefit provided by the administering agency, such as SSA). Eligibility workers can use this information without taking additional steps to verify that the data are accurate, according to FNS guidance. In contrast, data from a secondary source, defined in the guidance as not being verified upon receipt, require additional verification before the state agency can take action on an eligibility determination. High costs and resource demands. Officials in six selected states cited the high costs and resource demands of using advanced data analytics techniques. For example, officials we interviewed in North Dakota, which conducted only data matching, said that they lacked the funding and staff resources to use more advanced techniques. Officials we interviewed in New Mexico noted that they lacked the staff resources to use data analytics. Officials from North Dakota also said that they had the option to procure a data analytics tool but that the costs were prohibitively high. Officials in Wisconsin, which was employing more data analytics, said that they were not able to purchase access to a third-party data source using SNAP funding alone and had to seek funding from another federal program in order to afford these efforts. FNS Supported Certain States in Adopting Leading Practices for Data Analytics, but Assistance and Information Sharing Has Been Limited FNS Helped Some States Adopt Certain Leading Practices for Data Analytics FNS provided individualized assistance and training to several states across the country to build their capacity for data analytics on SNAP, consistent with several of GAO's leading practices. FNS provided assistance through grants, pilot projects, and training at conferences. The pilot projects also informed FNS's early efforts to help states improve their fraud prevention, detection, and investigation processes using data analytics. Specifically, in recent years, FNS's assistance to states has aligned with 4 of the 10 leading practices for data analytics identified by GAO in its Fraud Risk Framework.
Ensure Employees Have Sufficient Knowledge, Skills, and Training In fiscal years 2014 through 2017, FNS conducted a 10-state pilot project to identify and test promising practices in state fraud prevention and detection. As part of the project, each participating state received training and technical assistance in the use of data analytics, in addition to a review of its business processes. For example, officials from Utah, who participated in the pilot, said that FNS provided training to them on mining social media data. The officials added that the timing of the training was excellent because the state was beginning to build its capability for data analytics on its own. They said that their data analytics team has incorporated what they learned during the pilot and uses various data analytic techniques every month. As a result, according to officials, the state's overpayment collections increased. In fiscal years 2014 and 2015, FNS awarded nine Recipient Trafficking Prevention Grants and five Recipient Integrity Information Technology Grants to a total of 13 states; some of these grants funded training and staff to perform SNAP data analytics. For example, in fiscal year 2014, Kentucky received a grant to purchase and receive training on an analytic tool with the ability to analyze data and capture posts coming from various social media sites. In fiscal year 2015, Alaska received a grant that included 3 months of training related to the installation of the state's new fraud case management system that, among other things, would provide real-time data and automate manual processes to detect fraud and track cases. According to Alaska's grant application, this would allow the state to devote more time to investigations, prosecutions, recoupment, and analysis and increase the number of completed investigations. State officials we interviewed said that they also gained data analytics knowledge and skills from other states at conference workshops. For example, officials from North Dakota told us that they attended a conference presentation in which officials from another state discussed a performance measure that is designed to assess the savings associated with detecting SNAP fraud. Combine Data Across Programs Within the Agency FNS has provided grant funding and training to some states to help them combine data from different databases within the state to facilitate SNAP data analytics. For example, FNS's fiscal year 2015 information technology grants helped five states develop centralized data systems and consolidate data from multiple outdated systems. Nevada received a grant to fund the acquisition of a new data system that, according to its grant application, would combine the state's data on known SNAP fraud cases with transaction data and third-party data sets. The data on known fraud cases would be used to continuously refine data analyses to identify similar anomalies and patterns in the transaction data. Maine used its grant to acquire a new investigation case management system that consolidates data from multiple systems in a centralized repository. Similarly, New Jersey received a grant to acquire new computer systems that, according to its grant application, will integrate SNAP case management system data with data from several of the state's data systems, allowing investigators to perform analyses in real time.
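A minimal sketch of the kind of consolidation these grants describe appears below. The file and column names are hypothetical and do not represent any state's actual systems; the sketch simply joins eligibility records, known-fraud case outcomes, and per-household transaction summaries so that the patterns of confirmed cases can inform new flag rules.

```python
# Minimal sketch: combine eligibility, known-fraud, and transaction data
# into one analysis table. All names are illustrative placeholders.
import pandas as pd

eligibility = pd.read_csv("eligibility.csv")    # household_id, reported_income, ...
fraud_cases = pd.read_csv("fraud_cases.csv")    # household_id, confirmed_fraud
transactions = pd.read_csv("transactions.csv")  # household_id, amount, ...

tx_summary = (transactions
              .groupby("household_id")["amount"]
              .agg(total_spend="sum", purchase_count="count", avg_purchase="mean")
              .reset_index())

analysis = (eligibility
            .merge(tx_summary, on="household_id", how="left")
            .merge(fraud_cases, on="household_id", how="left"))
analysis["confirmed_fraud"] = analysis["confirmed_fraud"].fillna(False)

# With outcomes and transaction features in one table, analysts can compare
# confirmed cases against all other households and refine flag rules.
print(analysis.groupby("confirmed_fraud")[["avg_purchase", "purchase_count"]].mean())
```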
In addition to the grants, in fiscal year 2016, FNS sponsored a 5-day course on fraud detection that demonstrated how states could combine eligibility data with transaction and other data to identify potential fraud. Officials from six states participated. Pursue Access to External Data and Conduct Data Matching FNS has provided grants to assist some states in accessing and using external sources for data matching. For example, in fiscal year 2014, FNS provided recipient trafficking prevention grants to three states—Florida, Nevada, and Ohio—to update the systems that they use to match their SNAP recipients and those that have been disqualified in the state with FNS's national database of disqualified recipients. According to FNS, each grantee state planned to use the funds to link its system with FNS's database through the web rather than using a "batch" processing system, which would allow them to match data on applicants at the time of application or recertification rather than at specific intervals after eligibility is determined. Florida officials mentioned in the related grant proposal that using the state's current batch processing system meant that other states did not have real-time access to information about the state's disqualified recipients, thereby potentially increasing the chance of an ineligible individual receiving benefits. In addition, FNS administered a grant on behalf of OMB, which funded a pilot program for five southeastern states to develop the National Accuracy Clearinghouse (NAC), a data sharing system that allows participating states to identify applicants who are receiving benefits in the partnering states in near real time. According to one state official, a primary benefit of the NAC is that it enables each participating state to match data on individual beneficiaries across five states without having to connect to five different states' computer systems. One member of the NAC consortium from Florida said that the ability to match in near real time is helpful because the data available in the PARIS system are older and would identify individuals potentially receiving benefits in multiple states only months after the dual participation occurred, rather than at the time of application. As we have previously reported, data on benefit receipts are updated quarterly in PARIS. Conduct Data Mining FNS has funded pilot projects, training, and grants to assist some states in developing their capacity for data mining to identify potential fraud. FNS's 10-state pilot to test advanced data analytics techniques included the use of data mining, among other data analytic techniques. One of the techniques involved mining recipient transaction data for households that had shopped at disqualified retailers to develop a prioritized list of retailers and recipients to investigate. According to state officials we interviewed in Wisconsin, the technique automated a time- and labor-intensive process that state analysts had previously performed manually. The pilot project also used other data mining techniques to develop profiles of recipients who commit fraud. For instance, in Utah, the data analysis showed that recipients who committed fraud were more likely to have multiple replacement EBT cards and to make more purchases from small stores than other recipients. At the end of the pilot, FNS sponsored a training course that included detailed instruction on data mining.
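A minimal sketch of the disqualified-retailer technique described above follows; the column names and the simple sort-by-spend scoring rule are illustrative assumptions, not the pilot's actual method. It ranks households by their spending at retailers FNS has disqualified, producing the kind of prioritized investigation list the pilot generated.

```python
# Minimal sketch: prioritize households by activity at disqualified
# retailers. Names and scoring rule are illustrative.
import pandas as pd

def prioritize_households(transactions, disqualified_retailers, top_n=50):
    """transactions: household_id, retailer_id, amount (one row per purchase).
    disqualified_retailers: set of retailer_id values disqualified by FNS."""
    at_disqualified = transactions[
        transactions["retailer_id"].isin(disqualified_retailers)]
    ranked = (at_disqualified
              .groupby("household_id")["amount"]
              .agg(spend_at_disqualified="sum", purchases_at_disqualified="count")
              .sort_values("spend_at_disqualified", ascending=False))
    return ranked.head(top_n)  # households with the most activity come first
```

Automating this ranking is what replaces the manual review the Wisconsin officials described: the filter and aggregation run across the full transaction file, and investigators start from the top of the resulting list.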
Although FNS's past efforts reached only some states and encouraged only some leading practices, more recently, in May 2018, FNS released a SNAP Fraud Framework that provides more comprehensive guidance to help states adopt all 10 of GAO's leading practices for data analytics. Specifically, FNS's SNAP Fraud Framework provides a collection of examples, promising practices, and procedures to help state agencies with the prevention and detection of SNAP fraud that encompass all 10 data analytics leading practices from GAO's Fraud Risk Framework. (For a comparison of the practices in the two frameworks, see appendix III.) According to FNS officials, the SNAP Fraud Framework is meant to take a holistic, integrated approach to fraud, including data analytics, but they recognize that states differ in their readiness to adopt analytics. The framework's data analytics section provides a range of approaches, examples, case studies, and methods that allow all states to begin embedding analytics into their processes. FNS officials reported that they began conducting outreach to state officials about the framework in the summer of 2018. FNS officials said that they are also considering using grant funds to assist states with the implementation of components of the framework. Furthermore, FNS officials said that some of the potential technical assistance may include showing states how to develop their own analytic tools. FNS has also developed a maturity assessment to evaluate each state's capacity to implement the various components of the fraud framework. It covers a state's use of data analytics for fraud detection and investigations, as well as its learning and development opportunities for stakeholders who use the results of data analytics, such as investigators, hearing officials, and court officials. According to FNS officials, FNS's regional offices will conduct maturity assessments as part of management reviews by the end of fiscal year 2018. FNS's Assistance on Developing Data Analytics Capabilities Has Reached a Limited Number of States Although FNS has assisted some states in developing their data analytic capabilities, the methods it has used to do so were meant to reach only a limited number of states. Specifically, much of FNS's direct assistance to states came in the form of pilot projects, competitive grants, or conferences. According to officials, FNS is in the early stages of promoting states' use of data analytics for SNAP fraud prevention and detection, and its efforts have focused on assessing the current capacity of states to use data analytics and determining analytic practices that are effective. Furthermore, FNS's efforts generally had specific end dates and did not provide ongoing assistance to reach a broader group of states and provide them with the knowledge and tools to develop and maintain their data analytics efforts. (See table 5 for more information on the reach of FNS's direct assistance efforts.) Although FNS provided some training on using data analytics, it was not conducted on a recurring basis, and state officials we interviewed expressed concerns about their access to information on successful data analytics approaches. Officials we interviewed in five of our seven selected states said that they attended FNS conferences that provided training in data analytics and participated in regional discussions on the topic; however, these events were provided occasionally and limited to states within the region.
State officials said that participating in conferences in which they could learn from other states' experiences was particularly helpful, and they wanted more opportunities to do so. State officials also told us that it would be beneficial if FNS took a more active role in disseminating states' successful practices, particularly with regard to data analytics. Further communications about data analytics would be consistent with federal internal control standards that call for agencies to communicate necessary quality information to external parties in order to achieve the agency's objectives. Federal agencies can support external parties, such as state agencies, in achieving the federal agency's objectives by sharing information on effective practices used by the program or other external parties. Furthermore, officials we interviewed in selected states most frequently cited high costs and resource demands as a challenge to using advanced data analytics techniques. Although FNS has provided some financial support to state efforts, officials in two states that we reviewed told us that they were not always able to sustain efforts beyond the life of the FNS pilot or grant. For example, officials we interviewed from Wisconsin said that FNS's contractor for the 10-state pilot, in an effort separate from the contract, developed a tool that identified SNAP purchases made from disqualified SNAP retailers. Although the state officials found the tool to be highly efficient because it could sift through large amounts of data, the tool was available to the state only for a fee, which they said the state could not afford. Similarly, officials from Washington told us that as part of a recipient trafficking prevention grant, the state was able to hire two investigators to detect potential SNAP fraud that may be occurring via social media. However, according to state officials, the state was unable to maintain the effort after the grant ended. In our prior work on establishing data analytic programs to address fraud, we noted that one way to handle resource challenges is to identify opportunities that leverage a program's existing capabilities. In September 2016, GAO convened a forum of data-analysis experts to discuss considerations for entities establishing and refining data analytics programs, during which the costs of such programs were raised. Panelists, who included officials from FNS, noted that in developing a data analytics program, an entity should consider ways of leveraging resources throughout the entity. For example, panelists suggested that an entity could improve its data analytics group by combining a data warehouse from one department with existing statistical software from another and incorporating it with its current fraud-prevention system. The forum also suggested that a data analytics group should look across the agency to find staff who may have an interest or experience in working with data. Panelists noted that such efforts may be improved by seeking staff from a diverse set of positions and perspectives, including auditors, evaluators, investigators, and attorneys. Similarly, some state officials we interviewed shared creative ways to leverage existing resources. For example, officials from Florida and Wisconsin stated that they were able to leverage recovered funds from other programs to purchase access to a commercial database that matches eligibility data for individuals across related programs.
In Mississippi, officials said that they used SNAP transaction data to identify individuals living out of state and then determine whether those individuals were ineligible for both SNAP and other assistance programs. By combining data and analyses across two programs, the state officials said that they were able to close more cases and significantly increase cost savings. However, other state officials noted that leveraging resources, especially data, poses challenges that states will need to learn how to resolve. Specifically, some states reported facing problems sharing data across different systems and with restrictions on sharing sensitive personal information. For example, officials representing four states from the American Association of SNAP Directors (AASD) told us that, for states to leverage data, states' SNAP data systems need to be integrated across states. However, in their view, the cost of integration may exceed the benefits from integrating the data. In addition, state officials said that in order to leverage personal data, some states as well as programs in the same state will need to reach agreements that define how data will be extracted and used while protecting privacy. For example, a Wisconsin official told us that the state's data analytics group has difficulty acquiring data across programs within the state because of confidentiality and privacy rules as well as the difficulty of reaching data-sharing agreements with other programs. Moving forward, FNS's SNAP Fraud Framework, combined with its maturity assessment, will form the core of FNS's efforts to assist states with data analytics in a broad-based, systematic manner. According to FNS officials, the agency will be conducting outreach to states about the fraud framework and assessing both states' capacities in data analytics and barriers to gaining the necessary knowledge and tools for developing and maintaining those efforts. Conclusions To ensure that SNAP funds are used for the purposes for which they were intended, both the federal government and state agencies should have appropriate controls for detecting and addressing fraud. The use of data analytics, such as mapping and predictive analysis, may help SNAP agencies increase program integrity and improve administrative efficiency. Data mining and data matching techniques can help identify potential SNAP fraud, and predictive models can help identify characteristics of SNAP traffickers. Our use of analytics on SNAP out-of-state transaction data from three selected states identified only slight differences between those households that shopped out of state and those that did not, suggesting that analyses of other data elements that have been shown to be indicative of potential trafficking may allow states to better identify potential trafficking and, thereby, better target resources. Although FNS has efforts underway to promote the use of data analytics to improve SNAP fraud detection through its fraud framework and maturity assessment, officials in our selected states cited challenges with accessing and maintaining needed resources such as staff, technology, and tools. While these challenges may limit states' ability to implement data analytics, some of our selected states have successfully overcome such challenges to implement or enhance data analytics programs. For example, two states described leveraging recovered funds and reinvesting them to combat fraud.
Another state leveraged transaction data across two programs, resulting in financial savings and enhanced collections, which could be reinvested to combat fraud. As FNS conducts outreach to help states implement its fraud framework and uses its maturity assessment to assess states' anti-fraud capabilities, it has an opportunity to regularly assist states with adopting advanced data analytic techniques. Based on the experiences described by state officials, finding ways that states can leverage existing resources to improve their data analytic capabilities may be an important part of any solution. In its role as the federal oversight agency, FNS is in a position to collect information about those states that have built support for data analytics and leveraged existing resources to implement or expand their data analytics programs, and to widely disseminate that information to states seeking such examples. With wider dissemination of these examples of state successes, all state SNAP agencies could be better positioned to enhance their own efforts to identify and address SNAP fraud. Recommendation for Executive Action Building on ongoing efforts, the Administrator of FNS should develop and implement additional methods to widely distribute information to state agencies on an ongoing basis about successful efforts to adopt data analytics and strategies to leverage existing data, technology, and staff resources to enhance data analytics. (Recommendation 1) Agency Comments and Our Evaluation We provided a draft of this product to the U.S. Department of Agriculture for comment. In oral comments on September 14, 2018, FNS officials from SNAP's Program Accountability and Administration Division and the Deputy Associate Administrator for SNAP agreed with our recommendation. They noted that they have been moving in the general direction of this recommendation and would build on current efforts to address it, but noted that state readiness and technical capabilities are limiting factors in the adoption of data analytics. FNS also provided technical comments, which were incorporated into the report as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to relevant congressional committees, the Secretary of Agriculture, the FNS Administrator, and other relevant parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact us at (202) 512-7215 or LarinK@gao.gov or (202) 512-6722 or BagdoyanS@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to the report are listed in appendix IV. Appendix I: Objectives, Scope, and Methodology The objectives of this report were to review the following: (1) the extent to which SNAP households in selected states are making out-of-state purchases that may indicate potential recipient fraud; (2) the extent to which selected states are using data analytics—including those applied to out-of-state transactions—to find potential SNAP recipient fraud, and what advantages and challenges, if any, they have experienced in doing so; and (3) how FNS has assisted states in implementing leading practices for data analytics for fraud detection.
To address these objectives, we primarily focused on federal and state SNAP recipient anti-fraud work since the beginning of fiscal year 2015—the period that follows our August 2014 report on SNAP recipient fraud. We reviewed relevant federal laws, regulations, program guidance, and reports, and we interviewed FNS officials in headquarters and all seven regional offices to address all three objectives and obtained relevant documentation. To assess the extent to which SNAP households in selected states made out-of-state purchases that may indicate potential recipient fraud, we analyzed all out-of-state purchase data nationwide, and we analyzed transaction data for SNAP households in the District of Columbia and two states—North Dakota and Washington. We selected these states because they were among the top states for out-of-state spending in a non-border state in fiscal years 2015 and 2016, the two most recent years of SNAP data available when we started this review. We obtained SNAP transaction data from FNS for all participating households in the three selected states, and analyzed fiscal year 2017 data for households that spent all their benefits in a non-border state in that year. We also analyzed fiscal year 2017 data for all households in these three states for purchases that may indicate trafficking, based on common suspicious transaction types. We tested the transaction data for ten different suspicious transaction types that have been used by FNS and state SNAP officials to identify potential trafficking. While the transactions we flagged for potential trafficking in our three selected states are generally deemed potential indicators of fraud by SNAP officials, there could also be legitimate reasons for these purchases, and they do not prove trafficking. For that reason, our analysis focused on households with a greater frequency of questionable purchases in fiscal year 2017 indicating potential trafficking—specifically, purchases that resulted in 20 or more trafficking flags (a minimal sketch of this flag-counting step appears below). We assessed the reliability of the SNAP transaction data used in our analyses through review of related documentation, interviews with knowledgeable officials, and electronic testing of the data, and found them to be sufficiently reliable for our purposes. To determine how selected state agencies are using data analytics to identify potential SNAP recipient fraud, we interviewed officials from seven state SNAP agencies about their efforts. We obtained related documentation when available. We selected the District of Columbia, Massachusetts, Mississippi, New Mexico, North Dakota, Washington, and Wisconsin to reflect a range of experiences based on the percentage of non-border state transactions, receipt of related technical assistance, geographic region, and FNS's reports on their capacity to conduct data analysis. We interviewed state SNAP agency officials who oversee anti-fraud practices in each of our seven selected states. During each interview, we collected information on each state's data analytics activities and whether they have implemented leading practices for data analytics from GAO's Fraud Risk Framework. We also discussed the advantages and challenges of using data analytics. While information from these seven state SNAP agencies is not generalizable, it provided illustrative examples of agencies' efforts to use data analytics.
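As referenced above, a minimal sketch of the flag-counting step follows. Because the ten suspicious transaction types are not reproduced here, the sketch substitutes two illustrative rules (large whole-dollar purchases and unusually large purchases), which are commonly cited indicators but are not the actual rule set used in our analysis.

```python
# Minimal sketch: test each transaction against suspicious-purchase rules,
# count flags per household, and keep households at or above a threshold.
# The two rules and the $400 cutoff are illustrative assumptions.
import pandas as pd

def households_over_threshold(transactions, large_cutoff=400.0, threshold=20):
    """transactions: household_id, amount (one row per purchase)."""
    whole_dollar = (transactions["amount"] % 1 == 0) & (transactions["amount"] >= 50)
    very_large = transactions["amount"] >= large_cutoff
    flags = whole_dollar.astype(int) + very_large.astype(int)
    per_household = flags.groupby(transactions["household_id"]).sum()
    # Keep only households at or above the review threshold, most-flagged first.
    return per_household[per_household >= threshold].sort_values(ascending=False)
```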
To determine the degree to which FNS has assisted states in developing the use of data analytics, we reviewed documentation for grants FNS awarded to states to help prevent recipient trafficking or improve technology used to strengthen program integrity. We also reviewed the terms of work for a contract FNS awarded to a private consulting firm to conduct a pilot project with 10 states during fiscal years 2014-2017, as well as reports delivered by the contractor detailing the results of the work. In addition, we reviewed a guide to data analytics that FNS developed for a 5-day training session in August 2016, as well as the data analytics "maturity assessment" questionnaire that is intended for FNS regions to use to assess the capacity of the states. We also obtained and reviewed FNS's SNAP Fraud Framework and Supplementary Materials, which were released in May 2018. After developing an inventory of how FNS has assisted states in assessing and developing their data analytic capacity, we analyzed FNS's actions with respect to GAO's set of leading practices for data analytics from GAO's Fraud Risk Framework and GAO's standards for internal control. We also analyzed FNS's SNAP Fraud Framework to assess the degree to which it addressed GAO's leading practices on how to use data analytics to detect, prevent, and investigate SNAP fraud. Unless specified, we reviewed only data analytic activities that occurred since the beginning of fiscal year 2015, which marks the end of our previous analysis of FNS's anti-fraud activities concerning the SNAP program. To obtain FNS's views, we interviewed SNAP program officials both at headquarters and at each of SNAP's seven regional offices. To obtain a broader perspective on the use of data analytics across states, we interviewed officials representing the American Association of SNAP Directors (AASD) and the United Council on Welfare Fraud (UCOWF). AASD representatives included officials from the SNAP anti-fraud units for California, New York, Tennessee, and Texas. UCOWF representatives included officials from Florida, Louisiana, and Utah. In addition, we interviewed the Deputy Executive Director of the American Public Human Services Association, AASD's parent organization, and officials representing USDA's Office of Inspector General. We conducted this performance audit from May 2017 through October 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence we obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Out-of-State SNAP Spending by State, Fiscal Year 2017 In fiscal year 2017, the share of SNAP benefits spent out of state varied by state from approximately 1 percent to 13 percent, with most out-of-state purchases made in a border state. States whose SNAP recipients had the highest percent of out-of-state purchases included Delaware, District of Columbia, Idaho, Nebraska, New Mexico, Rhode Island, South Dakota, Tennessee, Vermont, and West Virginia. All of these states made at least 5 percent of total purchases out of state. The states with the lowest percent of out-of-state spending by SNAP recipients included Alaska, California, Florida, Hawaii, Michigan, and Texas (see fig. 5).
Detailed information on out-of-state spending by SNAP recipients, by state, is also provided in table 6 below. Appendix III: Leading Practices for Data Analytics and FNS's 2018 SNAP Fraud Framework Comparison In May 2018, FNS released a fraud framework that provides guidance to help states adopt all of GAO's leading practices for data analytics. The table below compares guidance in FNS's SNAP Fraud Framework to the leading practices in GAO's Fraud Risk Framework. Appendix IV: GAO Contacts and Staff Acknowledgments GAO Contacts Staff Acknowledgments In addition to the contacts named above, the following staff members made key contributions to this report: Danielle Giese and Philip Reiff, Assistant Directors; Celina Davidson and Lara Laufer, Analysts-in-Charge; Camille A. Keith; Kelly Snow; and Daren Sweeney. Also contributing to this report were Susan Aschoff, James Bennett, Alexander Galuten, James Murphy, Almeta Spencer, and Shana Wallace.
Why GAO Did This Study The federal government provided $64 billion in SNAP benefits in fiscal year 2017 to help approximately 42 million low-income individuals purchase food. SNAP is administered by FNS in partnership with states. To help reduce the risk of improper receipt or use of SNAP benefits, states use data analytics, including data matching and data mining, to identify patterns or trends indicative of potential fraud in SNAP purchases. Based on concerns about potential SNAP benefit trafficking across state lines, GAO was asked to review out-of-state transactions and states' efforts to combat such fraud. This report examines (1) the extent to which SNAP households in selected states made out-of-state purchases that may indicate potential fraud, (2) the advantages and challenges selected states have experienced in using data analytics to identify potential fraud, and (3) how FNS has assisted states in implementing leading practices for data analytics. GAO analyzed fiscal year 2017 data on SNAP purchases for North Dakota, Washington, and the District of Columbia, which had large percentages of non-border out-of-state purchases, and interviewed FNS officials and officials in these states as well as in Massachusetts, Mississippi, New Mexico, and Wisconsin about their use of data analytics compared with leading practices. What GAO Found Supplemental Nutrition Assistance Program (SNAP) recipients are allowed to spend their benefits outside their state of residence, and GAO's analysis of fiscal year 2017 SNAP data in three selected states found that overall about 2 percent of households made purchases, both in state and out-of-state, potentially indicative of trafficking—the prohibited exchange of benefits for cash or nonfood goods or services. Also, GAO found little difference in potential trafficking behaviors between households that made one or more purchases out-of-state and those that shopped only in their home state. Officials in all seven states GAO reviewed said they conducted data matching. Officials in five of these states stated that they use more sophisticated data analytics, including data mining, to help identify potential fraud (see figure). These officials cited advantages to using more sophisticated analytics to automate fraud detection and prioritize cases, allowing them to focus investigative resources on cases most likely to involve fraud. For example, officials in Mississippi reported that overpayment collections increased $2 million since the state incorporated more data techniques into its fraud detection efforts. However, officials in all seven selected states cited factors such as high costs, resource demands, data limitations, and difficulty obtaining organizational support as challenges that affect their ability to use or maintain more advanced data analytics techniques. The U.S. Department of Agriculture's Food and Nutrition Service (FNS) has helped some states adopt certain leading practices for data analytics, but its current outreach is limited. FNS has provided assistance to some states through pilot projects, grants, and training, but, beyond a recently issued guide, FNS has done little to disseminate information more broadly about successful efforts to adopt data analytics. FNS officials said they are in the early stages of promoting data analytics for SNAP fraud prevention and detection, and their efforts have focused on assessing the current capability of states to use data analytics and determining analytic practices that are effective.
State officials GAO interviewed said that the training provided was helpful but expressed concern about their access to information on successful data analytic approaches. Disseminating information to states on successful strategies could help states address challenges. What GAO Recommends GAO recommends that FNS more widely disseminate information to states about successful strategies used by states to adopt data analytics. FNS agreed with this recommendation.
Background BRAC 2005 Goals The Secretary of Defense established goals for BRAC 2005 in a November 2002 memorandum issuing initial guidance for BRAC 2005 and again in a March 2004 report to Congress certifying the need for a BRAC round. Specifically, the Secretary reported that the BRAC 2005 round would be used to (1) dispose of excess facilities, (2) promote force transformation, and (3) enhance jointness. Although DOD did not specifically define these three goals, we have generally described them in prior reports as follows. Dispose of excess facilities: Eliminating unneeded infrastructure to achieve savings. Promote force transformation: Correlating base infrastructure to the force structure and defense strategy. In the late 1990s, DOD embarked on a major effort to transform its business processes, human capital, and military capabilities. Transformation is also seen as a process intended to provide continuous improvements to military capabilities. For example, the Army used the BRAC process to transform the Army’s force structure from an organization based on divisions to more rapidly deployable, brigade-based units and to accommodate rebasing of overseas units. Enhance jointness: Improving joint utilization to meet current and future threats. According to DOD, “joint” connotes activities, operations, and organizations, among others, in which elements of two or more military departments participate. BRAC Phases Congress established clear time frames in the BRAC statute for many of the milestones involved with base realignments and closures. The BRAC 2005 process took 10 years from authorization through implementation. Congress authorized the BRAC 2005 round on December 28, 2001. The BRAC Commission submitted its recommendations to the President in 2005 and the round ended on September 15, 2011—6 years from the date the President submitted his certification of approval of the recommendations to Congress. The statute allows environmental cleanup and property caretaker and transfer actions associated with BRAC sites to exceed the 6-year time limit and does not set a deadline for the completion of these activities. Figure 1 displays the three phases of the BRAC 2005 round—analysis, implementation, and disposal—and key events involving Congress, DOD, and the BRAC Commission. During the analysis phase, DOD developed selection criteria, created a force structure plan and infrastructure inventory, collected and analyzed data, and proposed recommendations for base realignments and closures. The BRAC statute authorizing the BRAC 2005 round directed DOD to propose and adopt selection criteria to develop and evaluate candidate recommendations, with military value as the primary consideration. The BRAC statute also required DOD to develop a force structure plan based on an assessment of probable threats to national security during a 20-year period beginning with fiscal year 2005. Based on the statute’s requirements, the selection criteria were adopted as final in February 2004, and the force structure plan was provided to Congress in March 2004. To help inform its decision-making process during the analysis phase, the three military departments and the seven joint cross-service groups collected capacity and military value data that were certified as accurate by senior leaders. In testimony before the BRAC Commission in May 2005, the Secretary of Defense said that DOD collected approximately 25 million pieces of data as part of the BRAC 2005 process. 
Given the extensive volume of requested data, we noted in July 2005 that the data-collection process was lengthy and required significant efforts to help ensure data accuracy, particularly from joint cross-service groups that were attempting to obtain common data across multiple military components. We reported that, in some cases, coordinating data requests, clarifying questions and answers, controlling database entries, and other issues led to delays in the data-driven analysis DOD originally envisioned. As time progressed, however, these groups reported that they obtained the needed data, for the most part, to inform and support their scenarios. We ultimately reported that DOD's process for conducting its analysis was generally logical, reasoned, and well documented. After taking these plans and accompanying analyses into consideration, the Secretary of Defense was then required to certify whether DOD should close or realign military installations. The BRAC Commission assessed DOD's closure and realignment recommendations for consistency with the eight selection criteria and DOD's Force Structure Plan. Ultimately, the BRAC Commission accepted over 86 percent of DOD's proposed internal recommendations; rejected, modified, or added additional recommendations; and adjusted some costs of BRAC recommendations. Implementation Phase After the BRAC Commission released its recommendations, and the recommendations became binding, the implementation phase started. During this phase, which started on November 9, 2005, and continued to September 15, 2011 (as required by the statute authorizing BRAC), DOD took steps to implement the BRAC Commission's 198 recommendations. Also during this phase, the military departments were responsible for completing environmental impact studies to determine how to enact the BRAC Commission's relevant recommendations. The military departments implemented their respective recommendations to close and realign installations, establish joint bases, and construct new facilities. The large number and variety of BRAC actions led DOD to require BRAC oversight mechanisms to improve accountability for implementation. The BRAC 2005 round had more individual actions (813) than the four prior rounds combined (387). Thus, in the BRAC 2005 round, the Office of the Secretary of Defense for the first time required the military departments to develop business plans to better inform the Office of the Secretary of Defense of the status of implementation and financial details for each of the BRAC 2005 recommendations. These business plans included (1) information such as a listing of all actions needed to implement each recommendation, (2) schedules for personnel relocations between installations, and (3) updated cost and savings estimates by DOD based on current information. This approach permitted senior-level intervention if warranted to ensure completion of the BRAC recommendations by the statutory completion date. Disposal Phase The disposal phase began soon after the BRAC recommendations became binding and has continued to the present. During the disposal phase, DOD's policy was to act in an expeditious manner to dispose of closed properties. Such disposal actions included transferring the property to other DOD components and federal agencies, homeless-assistance providers, or local communities for the purposes of job generation, among other actions. In doing so, DOD has incurred caretaker and environmental cleanup costs.
For example, DOD reported to Congress that, as of September 2016, the military departments had spent $735 million on environmental cleanup associated with BRAC 2005 sites and had $482 million left to spend on those sites. Overall, the military departments reported that they had disposed of 59,499 acres and still needed to dispose of 30,239 acres from BRAC 2005 as of September 30, 2016. DOD Components Generally Did Not Measure the Achievement of BRAC 2005 Goals ASD (EI&E), the military services, and 25 of the 26 military units or organizations we met with did not measure the achievement of the BRAC 2005 goals—reducing excess infrastructure, transforming the military, and promoting jointness. Specifically, a senior ASD (EI&E) official stated that no performance measures existed to evaluate the achievement of goals and the office did not create baselines to measure performance. Air Force officials stated that they did not measure the achievement of goals but that it would have been helpful to have metrics to measure success, especially because DOD had requested another BRAC round from Congress. Army officials similarly stated that the Army did not measure the achievement of goals, noting that measuring excess capacity would have been important to help DOD get authorization for another BRAC round. Navy and Marine Corps officials said that they did not track performance measures or otherwise measure the achievement of the BRAC 2005 goals. Moreover, 25 of the 26 military units or organizations we met with stated that they did not measure the achievement of BRAC 2005 goals. The one exception in our selected sample was the command at Joint Base Charleston, which stated that it measured jointness through common output or performance-level standards for installation support, as required for installations affected by the BRAC 2005 recommendation on joint basing. By measuring jointness, officials were able to identify that the base met 86 percent of its common output level standards in the second quarter of fiscal year 2017, and it has identified recommendations to improve on those standards not met. Instead of measuring the achievement of BRAC 2005 goals, officials with ASD (EI&E) and the military departments stated that they tracked completion of the BRAC recommendations by the statutory deadline of September 2011 and measured the cost savings associated with the recommendations. Senior ASD (EI&E) officials stated that the primary measure of success was completing the recommendations as detailed by the implementation actions documented in the business plans. In addition, officials from the Army, Navy, and Air Force stated that they measured the savings produced as a result of BRAC 2005. For example, Army officials stated that closing bases in BRAC 2005 significantly reduced base operations support costs, such as by eliminating costs for trash collection, utilities, and information technology services. However, tracking completion of the recommendations and measuring savings did not enable the department to determine the success of the BRAC round in achieving its goals. For example, tracking completion of the recommendations establishing joint training centers did not give DOD insight into whether the military departments achieved the jointness goal by conducting more joint activities or operations.
Similarly, measuring savings did not allow DOD to know whether it achieved the goal of reducing excess infrastructure, and in reviewing DOD's data we found that the department ultimately did not have the data needed to calculate the excess infrastructure disposed of during BRAC 2005. Key practices on monitoring performance and results highlight the importance of using performance measures to track an agency's progress and performance, and stress that performance measures should include a baseline and target; should be objective, measurable, and quantifiable; and should include a time frame. Standards for Internal Control in the Federal Government emphasizes that an agency's management should track major agency achievements and compare these to the agency's plans, goals, and objectives. During BRAC 2005, DOD was not required to identify appropriate measures of effectiveness and track achievement of its goals. As a result, in March 2013, we recommended that, in the event of any future BRAC round, DOD identify appropriate measures of effectiveness and develop a plan to demonstrate the extent to which the department achieved the results intended from the implementation of the BRAC round. DOD did not concur with our recommendation, stating that military value should be the key driver for BRAC. However, we noted at the time that our recommendation does not undermine DOD's reliance on military value as the primary selection criterion for DOD's base realignment and closure candidate recommendations, and that DOD can still prioritize military value while identifying measures that help determine whether DOD achieved the military value that it seeks. As of October 2017, DOD officials stated that no action to implement our recommendation is expected. We continue to believe that, if any future BRAC round is authorized, the department would benefit from measuring its achievement of goals. Further, this information would assist Congress in assessing the outcomes of any future BRAC rounds. Given that DOD did not concur with our 2013 recommendation and does not plan to act upon it, DOD is not currently required to identify appropriate measures of effectiveness and track achievement of its BRAC goals in future rounds. Without a requirement to identify and measure the achievement of goals for a BRAC round, DOD cannot demonstrate to Congress whether the implementation of any future BRAC round will improve efficiency and effectiveness or otherwise have the effect that the department says its proposed recommendations will achieve. If Congress would like to increase its oversight of any future BRAC round, requiring DOD to identify appropriate measures of effectiveness and track achievement of its goals would give Congress improved visibility over the expected outcomes.

DOD Has Addressed Many but Not All Prior GAO Recommendations on BRAC 2005 and Has Further Opportunities to Improve Communications and Monitoring in Any Future BRAC Round

DOD has implemented 33 of the 65 recommendations we have made in our work since 2004, and it has the opportunity to address additional challenges regarding communications and monitoring to improve any future BRAC round. Specifically, for the BRAC analysis phase, DOD implemented 1 of 12 recommendations, and it has agreed to implement another 7 recommendations should Congress authorize any future BRAC round. Additionally, we found that DOD can improve its communications during the analysis phase.
For the implementation phase, DOD implemented 28 of 39 recommendations, and it has agreed to implement another 3 recommendations. Further, we found that it can improve monitoring of mission-related changes. For the disposal phase, DOD implemented 4 of 14 recommendations, and it has agreed to implement another 8 recommendations.

DOD Plans to Address Some Prior GAO Recommendations about BRAC's Analysis Phase, but Can Improve Communication during Data Collection

DOD Plans to Address Some Prior GAO Recommendations If Congress Authorizes a Future BRAC Round

Of the 12 recommendations we made from 2004 to 2016 to help DOD improve the BRAC analysis phase, DOD generally agreed with 6 and, as of October 2017, had implemented 1. Specifically, DOD implemented our May 2004 recommendation to provide a more detailed discussion of the assumptions used in its May 2005 report on BRAC recommendations. In addition, DOD stated it would address seven recommendations—the other five it agreed with and two with which it had previously nonconcurred—affecting BRAC's analysis phase in the event of any future BRAC round. These recommendations included better estimating information technology costs and improving ways of describing and entering cost data. DOD reported that the department is awaiting authorization of a future BRAC round before implementing these recommendations. Appendix III provides more information on our recommendations, DOD's response, and DOD's actions to date concerning the BRAC analysis phase.

DOD Officials Cited Challenges with Communications during Data Collection

DOD officials cited an additional challenge with communications during the BRAC 2005 analysis phase. Specifically, some military organizations we met with stated that they could not communicate to BRAC decision makers information outside of the data-collection process, which ultimately hindered analysis. For example:

Officials from the Army Human Resources Command in Fort Knox, Kentucky, said that facilities data submitted during the data-collection process did not convey a complete picture of excess capacity at the installation, and officials at Fort Knox were unable to share the appropriate context or details because nondisclosure agreements prevented communication. Specifically, they stated that the data showed an overall estimate of Fort Knox's excess capacity, but the data did not detail that the excess was not contiguous but rather based on space at 40 buildings spread throughout the installation. The officials stated that there was no way to communicate to decision makers during the data-collection process that the facilities were ill-suited for relocating the Human Resources Command and would require significant renovation costs to host the command's information technology infrastructure. The officials said that, because the needed details on the facility data were not communicated, the relocation moved forward without full consideration of alternatives for using better-suited excess space at other locations that would not require significant costs to renovate. As a result, the Army ultimately constructed a new headquarters building for the Human Resources Command at Fort Knox, and DOD spent approximately $55 million more than estimated to complete this action.
Officials at the Naval Consolidated Brig Charleston, South Carolina, told us that the lack of communication outside of the data-collection process resulted in decision makers not taking into account declining numbers of prisoners, leading to the construction of a new, oversized building in which to house prisoners. The officials said that the decision makers analyzing the facilities data did not consider the current correctional population; rather, the decision makers considered a correctional model based on the type of military fielded in World War II and the Korean and Vietnam wars—a force composed of conscripted personnel who served longer tours and had higher correctional needs. Further, the officials said the decision makers did not consider that, in the 2000 to 2005 period, DOD increased the use of administrative separations from military service rather than incarcerate service members convicted of offenses, such as drug-related crimes or unauthorized absence, further reducing correctional needs. The officials said they did not have a mechanism to communicate this information outside of the data-collection process when decision makers were analyzing the facilities data. As a result, the BRAC Commission recommendation added 680 beds throughout the corrections system, increasing the Navy's total confinement capacity to 1,200 posttrial beds. At Naval Consolidated Brig Charleston specifically, the BRAC recommendation added 80 beds at a cost of approximately $10 million. However, the facility already had excess capacity prior to the 2005 BRAC recommendation, and its excess capacity further increased after adding the 80 beds (see fig. 2).

Air National Guard officials said that the lack of communication outside of the data-collection process in the BRAC analysis phase meant that they could not identify the specific location of excess facilities. Specifically, they said the facilities data showed that Elmendorf Air Force Base, Alaska, had sufficient preexisting space to accept units relocating from Kulis Air Guard Station, Alaska, a base slated for closure. However, without communicating with base officials, Air National Guard officials did not know that the space was not contiguous. As a result, officials stated that DOD ultimately needed to complete additional military construction to move the mission from Kulis Air Guard Station. The BRAC Commission increased the Air Force's initial cost estimate by approximately $66 million to implement the BRAC recommendation.

U.S. Army Central officials stated that there was no communication outside of the data-collection process to allow DOD to fully consider workforce recruitment-related issues in deciding to move the U.S. Army Central headquarters to Shaw Air Force Base, South Carolina. While other criteria, such as military value, enhancing jointness, and enabling business process transformation, were considered in developing the recommendation, the officials stated that they were unable to communicate concerns regarding civilian hiring and military transfers. The officials said that since the headquarters' move to Shaw Air Force Base from Fort McPherson, Georgia, they have had difficulties recruiting civilian employees, such as information technology personnel, to their facility because of its location. They also said that it has been harder to encourage Army personnel to move to Shaw Air Force Base due to a perception that there is a lack of promotional opportunities at an Army organization on an Air Force base.
As a result, U.S. Army Central officials said morale surveys have indicated that these workforce issues have negatively affected mission accomplishment. The military departments and organizations we met with said that these concerns arose because DOD did not establish clear and consistent communications throughout different levels of authority in the department during the BRAC 2005 data collection. According to Standards for Internal Control in the Federal Government, management should use relevant data from reliable sources and process these data into quality information that is complete and accurate. Further, management should communicate quality information down, across, up, and around reporting lines to all levels of the department. Given the unclear and inconsistent communications in the department during data collection, DOD decision makers had data that may have been outdated or incomplete. Such outdated and incomplete data hindered the BRAC 2005 analysis and contributed to additional costs and recruitment problems at some locations affected by BRAC 2005, as previously discussed. Officials stated that clear and consistent communications would have improved the flow of information between on-the-ground personnel and decision makers and could have better informed the BRAC decision-making process. For example, Army officials said that nondisclosure agreements hindered their ability to call personnel at some installations to confirm details about buildings and facilities in question. The Air Force's Lessons Learned: BRAC 2005 report stated that site surveys could have communicated additional detail and generated more specific requirements than those generated in an automated software tool that the Air Force used for BRAC-related analysis. Navy officials said that, with limited communication, there were shortfalls in the decision-making process. Overall, officials from ASD (EI&E) and the military departments agreed that communication could be improved in the analysis phase of any future BRAC round. They also cited improved technology, such as geographic information system software and a new base stationing tool, as well as an increase in the amount of data collected, as factors that may mitigate any effects of reduced communication if Congress authorizes any future BRAC round. Without taking steps to establish clear and consistent communication throughout the department during data collection, DOD risks collecting outdated and incomplete data in any future BRAC rounds that may hinder its analysis and the achievement of its stated goals for BRAC.

DOD Has Addressed the Majority of Prior GAO Recommendations Affecting the BRAC Implementation Phase but Can Improve Monitoring

DOD Has Implemented 28 of 39 Recommendations to Address Challenges

To improve the implementation phase of the BRAC 2005 round, we made 39 recommendations between 2005 and 2016. DOD generally agreed with 32 and did not concur with 7. As of October 2017, DOD had implemented 28 of these recommendations. DOD stated that it does not plan on implementing 8 of the recommendations, and action on 3 of the recommendations is pending. Our previous recommendations relate to issues including providing guidance for consolidating training, refining cost and performance data, and periodically reviewing installation-support standards, among others.
Appendix IV provides more information on our recommendations, DOD's response, and DOD's actions to date concerning the BRAC implementation phase.

DOD Officials Cited Challenges with Monitoring Mission-Related Changes during Implementation

DOD officials identified challenges related to monitoring mission-related changes during the implementation of the BRAC 2005 recommendations, specifically when unforeseen circumstances developed that affected units' ability to carry out their missions following implementation or added difficulties to fulfilling the intent of the recommendation. For example:

During the implementation process, a final environmental impact statement at Eglin Air Force Base, Florida, contributed to the decision that only a portion of the initially proposed aircraft and operations would be established to fulfill the Joint Strike Fighter Initial Joint Training Site recommendation. Marine Corps officials stated that, as a result of this environmental impact statement and the subsequent limitations, the Marine Corps decided to eventually move its training from Eglin Air Force Base to Marine Corps Air Station Beaufort, South Carolina. Despite these limitations, the Air Force constructed infrastructure for the Marine Corps' use at Eglin Air Force Base in order to fulfill the minimum legal requirements of the recommendation. Specifically, the BRAC 2005 recommendation realigned the Air Force, Navy, and Marine Corps portions of the F-35 Joint Strike Fighter Initial Joint Training Site to Eglin Air Force Base. The Air Force's goal, reflected in the initial proposal for the Joint Strike Fighter Initial Joint Training Site at Eglin Air Force Base, was to accommodate 107 F-35 aircraft: three Air Force squadrons of 24 F-35 aircraft each, one Navy squadron of 15 F-35 aircraft, and one Marine Corps squadron of 20 F-35 aircraft. In 2008, after the implementation phase began, DOD completed an environmental impact statement for the proposed implementation of the BRAC recommendations at Eglin Air Force Base. Based on the environmental impact statement and other factors, a final decision was issued in February 2009 stating that the Air Force would implement only a portion of the proposed actions for the recommendation, with a limit of 59 F-35 aircraft and reduced planned flight operations due to potential noise impacts, among other factors. This decision stated that the subsequent operational limitations would not be practical for use on a long-term basis but would remain in place until a supplemental environmental impact statement could be completed. After the final supplemental environmental impact statement was released, in June 2014 DOD decided to continue the limited operations established in the February 2009 decision. Marine Corps officials stated that, as a result of the February 2009 decision, the Marine Corps decided that it would eventually move its F-35 aircraft from Eglin Air Force Base to Marine Corps Air Station Beaufort. According to Marine Corps officials, by September 2009 the Marine Corps had developed a concept to prepare Marine Corps Air Station Beaufort to host its F-35 aircraft. A September 2010 draft supplemental environmental impact statement included updated operational data and found that the Marine Corps' total airfield operations at Eglin Air Force Base would be reduced by 30.7 percent from the proposals first assessed in the 2008 final environmental impact statement.
However, to abide by the BRAC recommendation, Marine Corps officials stated that the Marine Corps temporarily established an F-35 training squadron at Eglin Air Force Base in April 2010. Using fiscal year 2010 military construction funding, DOD spent approximately $27.7 million to create a landing field for use by the new Marine Corps F-35 training squadron mission at Eglin Air Force Base. Marine Corps officials stated that this construction occurred during the same period as the decision to relocate the F-35 training squadron to Marine Corps Air Station Beaufort. ASD (EI&E) officials, however, stated that they did not know about this mission-related change, adding that they expected any change to be reported from the units to the responsible military department through the chain of command. Yet the military departments did not have guidance to report these mission-related changes to ASD (EI&E) in the business plans during implementation; without this guidance, the changes related to the Marine Corps F-35 mission were not relayed to ASD (EI&E) through the Air Force. Officials from the Joint Strike Fighter training program at Eglin Air Force Base stated that this construction was finished in June 2012 and that it was never used by the Marine Corps. In February 2014, the Marine Corps F-35 training squadron left Eglin Air Force Base and was established at Marine Corps Air Station Beaufort. The Marine Corps does not plan on returning any F-35 aircraft from Marine Corps Air Station Beaufort to Eglin Air Force Base for joint training activities.

Additionally, officials from the Armed Forces Chaplaincy Center stated that studies undertaken during the implementation phase determined that it would be difficult to fulfill the intent of a recommendation creating a joint center for religious training and education, yet the recommendation was implemented and included new construction with significantly greater costs than initial estimates. The BRAC 2005 recommendation consolidated Army, Navy, and Air Force religious training and education at Fort Jackson, South Carolina, establishing a Joint Center of Excellence for Religious Training and Education. Prior to the construction of facilities to accommodate this recommendation, the Interservice Training Review Organization conducted a study, published in November 2006, that assessed the resource requirements and costs of consolidating and colocating the joint chaplaincy training at Fort Jackson. This study identified limitations in the feasibility of consolidating a joint training mission for the chaplains, including differences within the services' training schedules and the limited availability of specific administrative requirements for each service, as well as limited instructors and curriculum development personnel. Despite the results of this study, in 2008 an approximately $11.5 million construction project began to build facilities for the Joint Center of Excellence for Religious Training and Education. However, ASD (EI&E) officials stated that they did not know about the results of the study. The military departments did not have guidance to report these mission-related changes, which ultimately were not relayed from the units to ASD (EI&E). Officials from the Armed Forces Chaplaincy Center stated that, following the start of construction to accommodate the recommendation, the services completed additional studies in 2008 and 2011 that further identified limitations to the feasibility of joint training for the services' chaplains.
Overall, the services discovered that 95 percent of the religious training could not be conducted jointly. Moreover, the military departments have faced additional impediments to their respective missions for religious training and education. For example, the Army stated it could not house its junior soldiers alongside the senior Air Force chaplaincy students, and both the Navy and Air Force had to transport their chaplains to other nearby bases to receive service-specific training. Due to these challenges, officials from the Armed Forces Chaplaincy Center stated that the Air Force chaplains left Fort Jackson and returned to Maxwell Air Force Base, Alabama, in 2017, and the Navy has also discussed leaving Fort Jackson and returning to Naval Station Newport, Rhode Island. Standards for Internal Control in the Federal Government emphasizes the importance of monitoring the changes an entity faces so that the entity's internal controls can remain aligned with changing objectives, environment, laws, resources, and risks. During the implementation phase of BRAC 2005, DOD did not have specific guidance for the military services to monitor mission-related changes that added difficulties to fulfilling the intent of BRAC recommendations. The Office of the Secretary of Defense required BRAC recommendation business plans to be submitted every 6 months and to include information such as a listing of all actions needed to implement each recommendation, schedules for personnel movements between installations, updated cost and savings estimates based on better and updated information, and implementation completion time frames. In addition, in November 2008, the Deputy Under Secretary of Defense (Installations and Environment) issued a memorandum requiring the military departments and certain defense agencies to present periodic status briefings to the Office of the Secretary of Defense on implementation progress and to identify any significant issues impacting the ability to implement BRAC recommendations by the September 15, 2011, statutory deadline. The 6-month business plan updates and the memorandum on periodic briefings focused primarily on changes affecting the ability to fully implement the BRAC recommendations and on meeting the statutory deadline, but they did not provide specific guidance to inform ASD (EI&E) of mission-related changes that arose from unforeseen challenges during the implementation phase. According to a senior official with ASD (EI&E), if the organization responsible for a business plan identified a need to change the plan to fulfill the legal obligation of the recommendation by the statutory deadline, ASD (EI&E) reviewed any proposed changes through meetings with stakeholders involved in implementation. According to this official, the office typically became involved with implementation only if the business plan was substantively out of line with the intent of the recommendation or if there was a dispute between two DOD organizations, such as two military departments. The official stated that any installation-level concerns had to be raised to the attention of ASD (EI&E) through the responsible military department's chain of command. If a mission-related change was not raised through the military department's chain of command, then ASD (EI&E) officials were not always aware of the details of such changes.
ASD (EI&E) officials acknowledged that they did not know about all mission-related changes during implementation, such as with the Joint Strike Fighter recommendations, and they stated that there was no explicit guidance informing the military departments to report challenges and mission-related changes to ASD (EI&E). Senior officials from ASD (EI&E) stated that additional guidance would be appropriate in the event of any future BRAC round. This lack of specific guidance to monitor and report mission-related changes that arose during BRAC 2005 implementation ultimately resulted in inefficient use of space and extra costs for DOD. Without providing specific guidance to monitor and report mission-related changes that require significant changes to the recommendation business plans, DOD will not be able to effectively monitor the efficient use of space and the costs associated with implementing any future BRAC recommendations. Furthermore, DOD may not be able to effectively adjust its plans to ensure that the department achieves its overall goals in any future BRAC rounds.

DOD Has Addressed Some Prior Recommendations Related to the BRAC Disposal Phase and Plans to Address More Recommendations If Congress Authorizes a Future BRAC Round

Of the 14 recommendations we made from 2007 to 2017 to help DOD address challenges affecting BRAC's disposal phase, DOD generally agreed with 12. As of October 2017, DOD had implemented 4 of the recommendations, with actions on 8 others pending. Our previous recommendations relate to three primary issues: guidance for communities managing the effects of the reduction or growth of DOD installations, the environmental cleanup process for closed properties, and the process for reusing closed properties for homeless assistance. Appendix V provides more information on our recommendations, DOD's response, and DOD's actions to date concerning the BRAC disposal phase. During our review, we identified an additional example of challenges in the disposal phase related to the environmental cleanup process. Specifically, officials representing Portsmouth, Rhode Island, stated that the city had issues with the environmental cleanup process resulting from BRAC 2005 changes at Naval Station Newport, Rhode Island. According to the site's environmental impact statement, the land Portsmouth is to receive is contaminated and requires cleanup prior to transfer, and officials from the community stated that the Navy has not provided them with a clear understanding of a time frame for the environmental cleanup process needed to transfer the property. However, a senior official from the Navy stated that uncertainties in available funds and unforeseen environmental obstacles are common and prevent the Navy from projecting specific estimates for environmental cleanup time frames. The officials representing Portsmouth stated that, due to the lack of information from the Navy on a projected time frame for cleaning and transferring the property, community representatives have begun to discuss declining the land and letting the Navy hold a public sale instead. We had previously recommended, in January 2017, that DOD create a repository or method to record and share lessons learned about how various locations have successfully addressed environmental cleanup challenges. DOD concurred, and actions are pending. Moreover, during our review we identified additional examples of challenges in the disposal phase related to the homeless-assistance program.
For example, officials representing the community of Wilmington, North Carolina, stated that they had issues with the homeless-assistance process regarding a closed Armed Forces Reserve Center. According to the officials, they did not know that there were legal alternatives to providing on-base property for homeless assistance. Wilmington officials stated that the city would have been willing to construct a homeless-assistance facility in a nonbase location and use the closed property for a different purpose, which would have expedited the overall redevelopment process. According to the officials, the organization that took over the property for homeless-assistance purposes lacks the financial means to complete the entire project plan, and as of July 2017 the project remained unfinished. We had previously recommended that DOD and the Department of Housing and Urban Development—which, with DOD, develops the implementing regulations for the BRAC homeless-assistance process—include information on legal alternatives to providing on-base property to expedite the redevelopment process, but DOD did not concur and stated no action is expected. Additionally, officials from New Haven, Connecticut, stated that the process of finding land suitable for a homeless-assistance provider and converting an Army Reserve Center into a police academy took an undesirably long time to complete. The officials stated that the process of preparing the redevelopment plan and transferring the property from DOD to the community lasted roughly 5 years, from 2008 to 2013, and they suggested streamlining or expediting this process. As a result of these types of delays, many properties have not yet been transferred from DOD to the communities, and undisposed properties continue to increase caretaker costs. As of September 30, 2016, DOD had received approximately $172 million in payments for transfers, and it had spent approximately $275 million on caretaker costs for buildings and land prior to transferring property on closed installations during BRAC 2005. Implementing our prior recommendations related to the BRAC environmental cleanup and homeless-assistance processes could help DOD expedite the disposal of unneeded and costly BRAC property, reduce the continuing fiscal exposure stemming from holding these properties, and ultimately improve the effectiveness of the disposal phase.

Conclusions

DOD has long faced challenges in reducing unneeded infrastructure, and on five different occasions DOD has used the BRAC process to reduce excess capacity and better match needed infrastructure to the force structure and to support military missions. In addition to using BRAC to reduce excess capacity, DOD also sought in the 2005 round to promote jointness across the military departments and realign installations, making the round the biggest, costliest, and most complex ever. While DOD finished its implementation of BRAC 2005 in September 2011 and continues to prepare some remaining sites for disposal, it did not measure whether and to what extent it achieved the round's goals of reducing excess infrastructure, transforming the military, and promoting jointness. Because it did not measure whether the BRAC actions achieved these goals, DOD cannot demonstrate whether the military departments have improved their efficiency or effectiveness as a result of the BRAC 2005 actions.
In October 2017, DOD officials stated the department does not plan to take action on our March 2013 recommendation to measure goals for any future BRAC round. Congress can take steps to improve its oversight of any future BRAC round, specifically by requiring DOD to identify and track appropriate measures of effectiveness. With such a requirement, Congress would have enhanced information to make decisions about approving any future BRAC rounds, while DOD would be in a stronger position to demonstrate the benefits it achieves relative to the up-front implementation costs incurred for holding any future BRAC rounds. In addition, challenges in the analysis, implementation, and disposal phases of BRAC 2005 led to unintended consequences, such as increases in costs, workforce recruitment issues, and delayed disposal of closed properties. Limited or restricted communications throughout different levels of authority in the department during data collection hampered the ability of decision makers to receive as much relevant information as possible during BRAC 2005. If Congress authorizes any future BRAC round, ASD (EI&E) can encourage clear and consistent communication throughout DOD during the analysis phase, thereby helping personnel to address any potential problems that may arise. In addition, without specific guidance to monitor mission-related changes during the BRAC implementation phase, DOD did not fulfill the intent of some recommendations and spent millions of dollars to build infrastructure that was ultimately unused or underutilized. This lack of specific guidance meant that ASD (EI&E) was not aware of all mission-related changes. By instituting improvements to the analysis, implementation, and disposal phases in any future BRAC round, DOD could better inform decision making, better ensure that its infrastructure meets the needs of its force structure, and better position itself to gain congressional approval for additional rounds of BRAC in the future.

Matter for Congressional Consideration

Congress should consider, in any future BRAC authorization, a requirement for DOD to identify appropriate measures of effectiveness and to track the achievement of its goals. (Matter for Consideration 1)

Recommendations for Executive Action

We are making the following two recommendations to the Secretary of Defense.

In the event of any future BRAC round, the Secretary of Defense should ensure that ASD (EI&E) and the military departments take steps to establish clear and consistent communications throughout the department during data collection. (Recommendation 1)

In the event of any future BRAC round, the Secretary of Defense should ensure that ASD (EI&E) provides specific guidance for the military departments to monitor and report on mission-related changes that require significant changes to the recommendation business plans. (Recommendation 2)

Agency Comments and Our Evaluation

We provided a draft of this report for review and comment to DOD. In written comments, DOD objected to our matter for congressional consideration and concurred with both recommendations. DOD's comments are summarized below and reprinted in their entirety in appendix VI. DOD also provided technical comments, which we incorporated as appropriate. DOD objected to our matter for congressional consideration that Congress should consider, in any future BRAC authorization, a requirement for DOD to identify appropriate measures of effectiveness and to track the achievement of its goals.
DOD stated that, as advised by BRAC counsel, it believes this requirement would subvert the statutory requirement that military value be the priority consideration. However, as we noted when we originally directed this recommendation to the department in March 2013, our recommendation does not undermine DOD's reliance on military value as the primary selection criterion for DOD's BRAC candidate recommendations, and DOD can still prioritize military value while identifying measures that help determine whether DOD achieved the military value that it seeks. Congress enacting a requirement for DOD to identify appropriate measures of effectiveness and to track the achievement of its goals, alongside the requirement to prioritize military value, would address DOD's concern about subverting a statutory requirement related to military value. Moreover, the department would likely have a better understanding of whether it achieved its intended results while still continuing to enhance military value. DOD concurred with our first recommendation that, in the event of any future BRAC round, the Secretary of Defense should ensure that ASD (EI&E) and the military departments take steps to establish clear and consistent communications throughout the department during data collection. In its letter, however, DOD stated it did not agree with our assertion that the perceptions of lower-level personnel are necessarily indicative of the process as a whole. We disagree with DOD's statement that we relied on the perceptions of lower-level personnel. We obtained perceptions from senior personnel in the various military organizations deemed by DOD leadership to be the most knowledgeable. We then corroborated these perceptions with those of senior officials from the military departments, along with evidence obtained from the Air Force and Army lessons-learned reports. Moreover, DOD stated that the ability to gather data was not limited by the nondisclosure agreements or an inability to communicate with those participating in the BRAC process. While DOD concurred with our recommendation, we continue to believe it should consider the perceptions obtained from knowledgeable personnel that data gathering was limited by nondisclosure agreements or an inability to communicate throughout different levels of authority in the department during data collection. DOD also concurred with our second recommendation that, in the event of any future BRAC round, the Secretary of Defense should ensure that ASD (EI&E) provides specific guidance for the military departments to monitor and report on mission-related changes that require significant changes to the recommendation business plans. In its letter, DOD stated it would continue to provide guidance, as it did in the 2005 BRAC round, to encourage resolution at the lowest possible level, with Office of the Secretary of Defense involvement limited to review and approval of any necessary changes to the business plans. However, as we reported, if a mission-related change was not raised through the military department's chain of command, ASD (EI&E) officials stated that they were not always aware of the details of such changes, hence the need for our recommendation. By providing specific guidance to monitor and report mission-related changes that require significant changes to the recommendation business plans, DOD may be able to more effectively adjust its plans to ensure that the department achieves its overall goals in any future BRAC rounds.
As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 15 days from the report date. At that time, we will send copies to the appropriate congressional committees; the Secretary of Defense; the Secretaries of the Army, Navy, and Air Force; and the Commandant of the Marine Corps. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4523 or leporeb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VII.

Appendix I: Selected Local Economic Data for Communities Affected by the 2005 BRAC Round Closures

Selected economic indicators for the 20 communities surrounding the 23 Department of Defense (DOD) installations closed in the 2005 Base Realignment and Closure (BRAC) round vary compared to national averages. In our analysis, we used annual unemployment and real per capita income growth rates compiled by the U.S. Bureau of Labor Statistics and the U.S. Bureau of Economic Analysis as broad indicators of the economic health of those communities where installation closures occurred. Our analyses of the U.S. Bureau of Labor Statistics annual unemployment data for 2016, the most recent data available, showed that 11 of the 20 closure communities had unemployment rates at or below the national average of 4.9 percent for the period from January through December 2016. Another seven communities had unemployment rates that were higher than the national average but at or below 6.0 percent. Only two communities had unemployment rates above 8.0 percent (see fig. 3). Of the 20 closure communities, Portland-South Portland, Maine (Naval Air Station Brunswick) had the lowest unemployment rate at 3.0 percent, and Yukon-Koyukuk, Alaska (Galena Forward Operating Location) had the highest rate at 17.2 percent. We also used per capita income data from the U.S. Bureau of Economic Analysis between 2006 and 2016 to calculate annualized growth rates and found that 11 of the 20 closure communities had annualized real per capita income growth rates that were higher than the national average of 1.0 percent (see fig. 4). The other 9 communities had rates that were below the national average. Of the 20 communities affected, Yukon-Koyukuk, Alaska (Galena Forward Operating Location) had the highest annualized growth rate at 4.6 percent, and Gulfport-Biloxi-Pascagoula, Mississippi (Mississippi Army Ammunition Plant and Naval Station Pascagoula) had the lowest rate at -0.1 percent.

Appendix II: Objectives, Scope, and Methodology

The objectives of our review were to assess the extent that the Department of Defense (DOD) (1) measured the achievement of goals for reducing excess infrastructure, transforming the military, and promoting jointness for the 2005 Base Realignment and Closure (BRAC) round and (2) implemented prior GAO recommendations and addressed any additional challenges faced in BRAC 2005 to improve performance for any future BRAC round. In addition, we describe how current economic indicators for the communities surrounding the 23 closed bases in BRAC 2005 compare to national averages. For all objectives, we reviewed the 2005 BRAC Commission's September 2005 report to the President, policy memorandums, and guidance on conducting BRAC 2005.
We also reviewed other relevant documentation, such as supporting BRAC analyses prepared by the military services or units related to the development of BRAC 2005 recommendations. We interviewed officials with the Office of the Assistant Secretary of Defense for Energy, Installations, and Environment; the Army; the Navy; the Air Force; the Marine Corps; the U.S. Army Reserve Command; and the National Guard Bureau. We also conducted site visits to Connecticut, Indiana, Kentucky, Massachusetts, North Carolina, Rhode Island, and South Carolina. We met with 26 military units or organizations, such as Air Force wings and Army and Navy installations' Departments of Public Works, and 12 communities involved with BRAC 2005 recommendations. These interviews provide examples of challenges faced by individual parties, but the information obtained is not generalizable to all parties involved in the BRAC process. We selected locations for site visits to ensure geographic diversity, a mix of types of BRAC recommendations (closures, transformation, or jointness), and at least one installation from, or community associated with, each military department. To assess the extent that DOD measured the achievement of goals for reducing excess infrastructure, transforming the military, and promoting jointness for BRAC 2005, we met with officials to discuss measurement of goals and requested any related documentation. We compared DOD's efforts to Standards for Internal Control in the Federal Government, which emphasizes that an agency's management should track major agency achievements and compare these to the agency's plans, goals, and objectives. We also tried to calculate the excess infrastructure disposed of during BRAC 2005; however, DOD's data were incomplete. Specifically, in reviewing the square footage and plant replacement value data from DOD's Cost of Base Realignment Actions model, we found that data from several bases were not included. Additionally, a senior official with the Office of the Assistant Secretary of Defense for Energy, Installations, and Environment stated that the data provided were not the most current data used during BRAC 2005 and that the office did not have access to the complete data. We also tried to corroborate the square footage and plant replacement value data from the Cost of Base Realignment Actions model with DOD's 2005 Base Structure Report, but we found the data to be incomparable. As such, we determined that the incomplete and outdated data were not sufficiently reliable to calculate the excess infrastructure disposed of during BRAC 2005. To assess the extent that DOD implemented prior GAO recommendations on BRAC 2005 and addressed any additional challenges faced in BRAC 2005 to improve performance for any future BRAC round, we reviewed our prior reports and testimonies on BRAC 2005 to identify recommendations made and determined whether those recommendations applied to the analysis, implementation, or disposal phase of BRAC 2005. We then identified whether DOD implemented the recommendations we made by discussing their status with agency officials and obtaining copies of agency documents supporting the recommendations' implementation. We also met with officials to identify what challenges, if any, continue to be faced and what opportunities exist to improve the analysis, implementation, and disposal phases for any future BRAC round. For the analysis phase, we reviewed military service lessons-learned documents.
For the implementation phase, we reviewed business plans supporting the implementation of the BRAC 2005 recommendations and other applicable documentation, such as a workforce planning study and an environmental impact statement affecting the implementation of some recommendations. For the disposal phase, we analyzed DOD's caretaker costs for closed bases that it has not yet transferred. We compared information about challenges in the analysis, implementation, and disposal phases to criteria for communications, monitoring, and risk assessments in Standards for Internal Control in the Federal Government. To describe how current economic indicators for the communities surrounding the 23 closed bases in BRAC 2005 compare to national averages, we collected economic indicator data on the communities surrounding closed bases from the Bureau of Labor Statistics and the Bureau of Economic Analysis in order to compare them with national averages. To identify the communities surrounding closed bases, we focused our review on the 23 major DOD installations closed in the BRAC 2005 round and their surrounding communities. For BRAC 2005, DOD defined major installation closures as those that had a plant replacement value exceeding $100 million. We used information from our 2013 report, which identified the major closure installations. We then defined the “community” surrounding each major installation by (1) identifying the economic area in DOD's Base Closure and Realignment Report, which linked a metropolitan statistical area, a metropolitan division, or a micropolitan statistical area to each installation, and then (2) updating those economic areas based on the most current statistical areas or divisions, as appropriate. Because DOD's BRAC report did not identify the census area for the Galena Forward Operating Location in Alaska or the Naval Weapons Station Seal Beach Detachment in Concord, California, we identified the town of Galena as within the Yukon-Koyukuk Census Area and the city of Concord as within the Oakland-Hayward-Berkeley, CA Metropolitan Division, and our analyses used the economic data for these areas. See table 1 for a list of the major DOD installations closed in BRAC 2005 and their corresponding economic areas. To compare the economic indicator data of the communities surrounding the 23 major DOD installations closed in the BRAC 2005 round to U.S. national averages, we collected and analyzed calendar year 2016 unemployment data from the U.S. Bureau of Labor Statistics and calendar year 2006 through 2016 per capita income growth data, along with data on inflation, from the U.S. Bureau of Economic Analysis, which we used to calculate annualized real per capita income growth rates. Calendar year 2016 was the most current year for which local area data were available from these databases. We assessed the reliability of these data by reviewing U.S. Bureau of Labor Statistics and U.S. Bureau of Economic Analysis documentation regarding the methods each agency used in producing its data and found the data to be sufficiently reliable for reporting 2016 annual unemployment rates and 2006 through 2016 real per capita income growth. We used unemployment and annualized real per capita income growth rates as key performance indicators because (1) DOD used these measures in its community economic impact analysis during the BRAC location selection process and (2) economists commonly use these measures in assessing the economic health of an area over time.
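To make the calculation concrete, the annualized real growth rate described above can be written as a standard compound-growth formula. The expression below is an illustrative sketch in our own notation (the 10-year span and the inflation adjustment follow the methodology described in this appendix; the dollar figures in the example are hypothetical):

\[ g = \left( \frac{\text{real per capita income}_{2016}}{\text{real per capita income}_{2006}} \right)^{1/10} - 1 \]

For instance, a community whose real per capita income rose from a hypothetical $40,000 in 2006 to $45,000 in 2016 would have an annualized real growth rate of (45,000 / 40,000)^(1/10) − 1, or about 1.2 percent—just above the 1.0 percent national average reported in appendix I.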
While our assessment provides an overall picture of how these communities compare with the national averages, it does not isolate the condition, or the changes in that condition, that may be attributed to a specific BRAC action. We conducted this performance audit from April 2017 to March 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix III: GAO Reviews Related to the BRAC 2005 Analysis Phase, Related Recommendations, and DOD Actions

To improve the analysis phase of the 2005 Base Realignment and Closure (BRAC) round, we made 12 recommendations between 2004 and 2016. The Department of Defense (DOD) fully concurred with 4, partially concurred with 2, and did not concur with 6 of these recommendations. It implemented 1 of the 12 recommendations (see table 2). According to DOD officials, DOD will be unable to take action on 7 recommendations unless Congress authorizes any future BRAC round.

Appendix IV: GAO Reviews Related to the BRAC 2005 Implementation Phase, Related Recommendations, and DOD Actions

To improve the implementation phase of the 2005 Base Realignment and Closure (BRAC) round, we made 39 recommendations between 2005 and 2016. The Department of Defense (DOD) fully concurred with 17, partially concurred with 15, and did not concur with 7 of these recommendations. DOD implemented 28 of them (see table 3).

Appendix V: GAO Reviews Related to the BRAC 2005 Disposal Phase, Related Recommendations, and DOD Actions

To improve the disposal phase of the 2005 Base Realignment and Closure (BRAC) round, we made 14 recommendations between 2007 and 2017. The Department of Defense (DOD) fully concurred with 7, partially concurred with 5, and did not concur with 2 of these recommendations. DOD implemented 4 of them, with 8 recommendations pending further action (see table 4). According to DOD officials, DOD will be unable to take action on 5 of the 8 pending recommendations until another BRAC round is authorized.

Appendix VI: Comments from the Department of Defense

Appendix VII: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Gina Hoffman (Assistant Director), Tracy Barnes, Irina Bukharin, Timothy Carr, Amie Lesser, John Mingus, Kevin Newak, Carol Petersen, Richard Powelson, Clarice Ransom, Jodie Sandel, Eric Schwab, Michael Silver, and Ardith Spence made key contributions to this report.

Related GAO Products

High-Risk Series: Progress on Many High-Risk Areas, While Substantial Efforts Needed on Others. GAO-17-317. Washington, D.C.: February 15, 2017.

Military Base Realignments and Closures: DOD Has Improved Environmental Cleanup Reporting but Should Obtain and Share More Information. GAO-17-151. Washington, D.C.: January 19, 2017.

Military Base Realignments and Closures: More Guidance and Information Needed to Take Advantage of Opportunities to Consolidate Training. GAO-16-45. Washington, D.C.: February 18, 2016.

Military Base Realignments and Closures: Process for Reusing Property for Homeless Assistance Needs Improvements. GAO-15-274. Washington, D.C.: March 16, 2015.

DOD Joint Bases: Implementation Challenges Demonstrate Need to Reevaluate the Program. GAO-14-577. Washington, D.C.: September 19, 2014.
Defense Health Care Reform: Actions Needed to Help Realize Potential Cost Savings from Medical Education and Training. GAO-14-630. Washington, D.C.: July 31, 2014.

Defense Infrastructure: DOD's Excess Capacity Estimating Methods Have Limitations. GAO-13-535. Washington, D.C.: June 20, 2013.

Defense Infrastructure: Communities Need Additional Guidance and Information to Improve Their Ability to Adjust to DOD Installation Closure or Growth. GAO-13-436. Washington, D.C.: May 14, 2013.

Military Bases: Opportunities Exist to Improve Future Base Realignment and Closure Rounds. GAO-13-149. Washington, D.C.: March 7, 2013.

DOD Joint Bases: Management Improvements Needed to Achieve Greater Efficiencies. GAO-13-134. Washington, D.C.: November 15, 2012.

Military Base Realignments and Closures: The National Geospatial-Intelligence Agency's Technology Center Construction Project. GAO-12-770R. Washington, D.C.: June 29, 2012.

Military Base Realignments and Closures: Updated Costs and Savings Estimates from BRAC 2005. GAO-12-709R. Washington, D.C.: June 29, 2012.

Military Base Realignments and Closures: Key Factors Contributing to BRAC 2005 Results. GAO-12-513T. Washington, D.C.: March 8, 2012.

Excess Facilities: DOD Needs More Complete Information and a Strategy to Guide Its Future Disposal Efforts. GAO-11-814. Washington, D.C.: September 19, 2011.

Military Base Realignments and Closures: Review of the Iowa and Milan Army Ammunition Plants. GAO-11-488R. Washington, D.C.: April 1, 2011.

Defense Infrastructure: High-Level Federal Interagency Coordination Is Warranted to Address Transportation Needs beyond the Scope of the Defense Access Roads Program. GAO-11-165. Washington, D.C.: January 26, 2011.

Military Base Realignments and Closures: DOD Is Taking Steps to Mitigate Challenges but Is Not Fully Reporting Some Additional Costs. GAO-10-725R. Washington, D.C.: July 21, 2010.

Defense Infrastructure: Army Needs to Improve Its Facility Planning Systems to Better Support Installations Experiencing Significant Growth. GAO-10-602. Washington, D.C.: June 24, 2010.

Military Base Realignments and Closures: Estimated Costs Have Increased While Savings Estimates Have Decreased Since Fiscal Year 2009. GAO-10-98R. Washington, D.C.: November 13, 2009.

Military Base Realignments and Closures: Transportation Impact of Personnel Increases Will Be Significant, but Long-Term Costs Are Uncertain and Direct Federal Support Is Limited. GAO-09-750. Washington, D.C.: September 9, 2009.

Military Base Realignments and Closures: DOD Needs to Update Savings Estimates and Continue to Address Challenges in Consolidating Supply-Related Functions at Depot Maintenance Locations. GAO-09-703. Washington, D.C.: July 9, 2009.

Defense Infrastructure: DOD Needs to Periodically Review Support Standards and Costs at Joint Bases and Better Inform Congress of Facility Sustainment Funding Uses. GAO-09-336. Washington, D.C.: March 30, 2009.

Military Base Realignments and Closures: DOD Faces Challenges in Implementing Recommendations on Time and Is Not Consistently Updating Savings Estimates. GAO-09-217. Washington, D.C.: January 30, 2009.

Military Base Realignments and Closures: Army Is Developing Plans to Transfer Functions from Fort Monmouth, New Jersey, to Aberdeen Proving Ground, Maryland, but Challenges Remain. GAO-08-1010R. Washington, D.C.: August 13, 2008.

Defense Infrastructure: High-Level Leadership Needed to Help Communities Address Challenges Caused by DOD-Related Growth. GAO-08-665. Washington, D.C.: June 17, 2008.
Defense Infrastructure: DOD Funding for Infrastructure and Road Improvements Surrounding Growth Installations. GAO-08-602R. Washington, D.C.: April 1, 2008.

Military Base Realignments and Closures: Higher Costs and Lower Savings Projected for Implementing Two Key Supply-Related BRAC Recommendations. GAO-08-315. Washington, D.C.: March 5, 2008.

Defense Infrastructure: Realignment of Air Force Special Operations Command Units to Cannon Air Force Base, New Mexico. GAO-08-244R. Washington, D.C.: January 18, 2008.

Military Base Realignments and Closures: Estimated Costs Have Increased and Estimated Savings Have Decreased. GAO-08-341T. Washington, D.C.: December 12, 2007.

Military Base Realignments and Closures: Cost Estimates Have Increased and Are Likely to Continue to Evolve. GAO-08-159. Washington, D.C.: December 11, 2007.

Military Base Realignments and Closures: Impact of Terminating, Relocating, or Outsourcing the Services of the Armed Forces Institute of Pathology. GAO-08-20. Washington, D.C.: November 9, 2007.

Military Base Realignments and Closures: Transfer of Supply, Storage, and Distribution Functions from Military Services to Defense Logistics Agency. GAO-08-121R. Washington, D.C.: October 26, 2007.

Defense Infrastructure: Challenges Increase Risks for Providing Timely Infrastructure Support for Army Installations Expecting Substantial Personnel Growth. GAO-07-1007. Washington, D.C.: September 13, 2007.

Military Base Realignments and Closures: Plan Needed to Monitor Challenges for Completing More Than 100 Armed Forces Reserve Centers. GAO-07-1040. Washington, D.C.: September 13, 2007.

Military Base Realignments and Closures: Observations Related to the 2005 Round. GAO-07-1203R. Washington, D.C.: September 6, 2007.

Military Base Closures: Projected Savings from Fleet Readiness Centers Likely Overstated and Actions Needed to Track Actual Savings and Overcome Certain Challenges. GAO-07-304. Washington, D.C.: June 29, 2007.

Military Base Closures: Management Strategy Needed to Mitigate Challenges and Improve Communication to Help Ensure Timely Implementation of Air National Guard Recommendations. GAO-07-641. Washington, D.C.: May 16, 2007.

Military Base Closures: Opportunities Exist to Improve Environmental Cleanup Cost Reporting and to Expedite Transfer of Unneeded Property. GAO-07-166. Washington, D.C.: January 30, 2007.

Military Bases: Observations on DOD's 2005 Base Realignment and Closure Selection Process and Recommendations. GAO-05-905. Washington, D.C.: July 18, 2005.

Military Bases: Analysis of DOD's 2005 Selection Process and Recommendations for Base Closures and Realignments. GAO-05-785. Washington, D.C.: July 1, 2005.

Military Base Closures: Observations on Prior and Current BRAC Rounds. GAO-05-614. Washington, D.C.: May 3, 2005.

Military Base Closures: Assessment of DOD's 2004 Report on the Need for a Base Realignment and Closure Round. GAO-04-760. Washington, D.C.: May 17, 2004.
Why GAO Did This Study

The 2005 BRAC round was the costliest and most complex BRAC round ever. In contrast to prior rounds, which focused on the goal of reducing excess infrastructure, DOD's goals for BRAC 2005 also included transforming the military and fostering joint activities. GAO was asked to review DOD's performance outcomes from BRAC 2005. This report examines the extent to which DOD has (1) measured the achievement of its goals for BRAC 2005 and (2) implemented prior GAO recommendations on BRAC 2005 and addressed any additional challenges to improve performance for any future BRAC round. GAO reviewed relevant documents and guidance; met with a nongeneralizable selection of 26 military organizations and 12 communities involved with BRAC 2005; and interviewed DOD officials.

What GAO Found

The Department of Defense (DOD) components generally did not measure the achievement of goals—reducing excess infrastructure, transforming the military, and promoting joint activities among the military departments—for the 2005 Base Realignment and Closure (BRAC) round. In March 2013, GAO recommended that, for any future BRAC round, DOD identify measures of effectiveness and develop a plan to demonstrate achieved results. DOD did not concur and stated that no action is expected. Without a requirement for DOD to identify measures of effectiveness and track achievement of its goals, Congress will not have full visibility over the expected outcomes or achievements of any future BRAC rounds.

Of the 65 recommendations GAO has made to help DOD address challenges it faced in BRAC 2005, DOD had implemented 33 as of October 2017 (with 18 pending DOD action). DOD has not addressed challenges associated with communication and with monitoring mission-related changes. Specifically:

Some military organizations stated that they could not communicate information outside of the data-collection process to BRAC decision makers because DOD did not establish clear and consistent communications. For example, Army officials at Fort Knox, Kentucky, stated that there was no way to communicate that excess facilities were ill-suited for relocating the Human Resources Command, and DOD moved forward without full consideration of alternatives for using better-suited excess space at other locations. As a result, DOD spent about $55 million more than estimated to construct a new building at Fort Knox.

DOD implemented BRAC recommendations that affected units' ability to carry out their missions because DOD lacked specific guidance to monitor and report on mission-related changes. For example, DOD spent about $27.7 million on a landing field for a Marine Corps F-35 training squadron at Eglin Air Force Base, Florida, even though it had been previously decided to station the F-35 aircraft and personnel at another base.

By addressing its communication and monitoring challenges, DOD could better inform decision making, better ensure that its infrastructure meets the needs of its force structure, and better position itself to achieve its goals in any future BRAC round.

What GAO Recommends

Congress should consider requiring DOD to identify and track appropriate measures of effectiveness in any future BRAC round. Also, GAO recommends that in any future BRAC round DOD (1) take steps to establish clear and consistent communications while collecting data and (2) provide specific guidance to the military departments to monitor and report on mission-related changes during implementation. DOD concurred with the two recommendations but objected to Congress requiring it to identify and track performance measures; GAO continues to believe this to be an appropriate action for the reasons discussed in the report. GAO also continues to believe that DOD should fully implement GAO's prior recommendations on BRAC 2005.
Background

Federal statutes and a number of executive orders reflect the federal government's policy to encourage the participation of small businesses, including those owned and controlled by socially and economically disadvantaged individuals, in federal contracting. One key statute is the Small Business Act, which established SBA and directed it to aid, counsel, assist, and protect the interests of small business concerns, among other things. The Small Business Act, as amended over the years, as well as executive orders, emphasize the government's policies on contracting with SDBs and businesses owned by women and minorities. The Small Business Act sets a minimum government-wide goal for small business participation of not less than 23 percent of the total value of all prime contracts for each fiscal year and makes SBA responsible for reporting annually to the President and Congress on agencies' progress in meeting this goal, and making this information available on a public website. SBA reported that the federal government reached this goal for the fifth consecutive year in fiscal year 2017, awarding about 24 percent of total federal contract dollars to small businesses. SBA also negotiates specific goals with agencies to ensure the government-wide goal is met. Each agency's progress toward meeting its goals is generally based on the percentage of obligations on contracts with small businesses.

Categories of Specified Businesses

The three categories of businesses we examined for this report are small disadvantaged, minority-owned, and women-owned.

Small disadvantaged business. Because SBA's 8(a) business development program and SDB criteria are similar, in this report we use the term "small disadvantaged business" or "SDB" to refer to both categories of businesses. Section 8(a) of the Small Business Act established the 8(a) business development program, which authorizes the SBA to enter into contracts with other agencies and award subcontracts for performing those contracts to firms eligible for program participation. To be certified under the 8(a) program, a business must, in general, satisfy requirements for size, be at least 51 percent unconditionally owned and controlled by one or more socially and economically disadvantaged individuals who are U.S. citizens, and demonstrate potential for success. Similar to the 8(a) program, SDBs are defined as those that are primarily owned and controlled by one or more socially and economically disadvantaged individuals, though there are some differences in criteria for the 8(a) program and SDB classification. For example, businesses in the 8(a) program must demonstrate the potential for success and business principals must demonstrate good character, but the requirements to demonstrate these do not apply to SDB classification. A business's self-identification as SDB in the federal government's System for Award Management does not automatically lead to acceptance into SBA's 8(a) business development program.

Minority-owned business. Businesses of all sizes that are at least 51 percent owned by one or more members of a minority group may self-identify as minority-owned businesses in the federal government's System for Award Management. Minority-owned businesses are further broken down into businesses owned by Asian-Pacific-, Subcontinent-Asian-, Black-, Hispanic-, Native-Americans, and other.

Women-owned business.
Businesses of all sizes that are at least 51 percent owned by one or more women and whose management and daily business operations are controlled by one or more women may self-identify as a women-owned business in the System for Award Management.

These three categories of specified businesses overlap. For example, an SDB may be women-owned and therefore be counted in FPDS-NG as both an SDB and a women-owned business. To avoid double-counting when presenting consolidated data, we counted obligations and businesses classified under more than one category only once.

Federal Advertising Activities

As we have previously reported, there are several types of activities that are supported by federal advertising contracts. Table 1 provides descriptions and examples of some of these activities.

Federal Agencies Have on Average Directed 13 Percent of Advertising Contract Obligations to Specified Businesses over the Past 5 Years

Specified Businesses Generally Received an Increasing Share of Advertising Dollars

Over the past 5 fiscal years (2013 through 2017), federal agencies have obligated on average about $870 million annually for advertising contracts, with about 13 percent (approximately $114 million annually) of these obligations going to specified businesses. This share of advertising contract obligations going to these businesses over fiscal years 2013 through 2017 was consistent with the share of total federal contracting obligations going to these businesses (also on average 13 percent over this period). Advertising contract obligations to specified businesses and the number of these businesses receiving advertising contract obligations have both generally increased over fiscal years 2013 through 2017. The amount of advertising contract obligations going to these businesses nearly doubled from fiscal year 2013 to 2017 (from $75 million to $147 million) and also increased as a percent of total advertising contract obligations (from 9 percent of these obligations to 16 percent). Specified businesses also represented an increasing share of businesses receiving advertising contract obligations, from 30 percent (194 businesses) in fiscal year 2013 to 39 percent (250 businesses) in fiscal year 2017. Figure 1 shows advertising contract obligations to specified businesses and the number of these businesses receiving these obligations over fiscal years 2013 through 2017.

In the 5 years from fiscal year 2013 through 2017, a relatively small number of specified businesses received a relatively large amount of federal advertising contract obligations. For example, the top five businesses received about 40 percent of annual advertising contract obligations to specified businesses over the 5-year period. Consistent with findings from our previous work, obligations were also concentrated among a relatively small number of contracts. Figure 2 shows the distribution of advertising contract obligations among specified businesses, with amounts going to the five largest businesses (in terms of advertising contract obligations received) and all others.

Federal advertising contract obligations to all three categories of specified businesses generally increased between fiscal years 2013 and 2017, although some years showed decreases. (The amount going to women-owned businesses declined between fiscal years 2014 and 2015, and the amounts going to minority-owned businesses and SDBs declined between fiscal years 2016 and 2017.)
The most notable increase over the 5-year timeframe, both in dollar and percentage terms, was in the women-owned category, which increased by $56 million, or 93 percent. Figure 3 shows the amounts obligated to each specified business category, and to the three categories combined. Table 2 in appendix II shows the amounts obligated to each specified business category, in dollars and as a percentage of federal advertising contract obligations, in each of the 5 years.

SBA officials we interviewed told us that a program they started in 2011, the Women-Owned Small Business Federal Contracting Program, may have accounted for some of the increase in contracting rates with women-owned businesses over the past 5 years. This is because the program aims to help women-owned small businesses have an equal opportunity to participate in federal contracting and to assist agencies in achieving their goals for contracting with women-owned small businesses. The program generally allows women-owned small businesses to compete for set-aside contracts or receive sole source awards in industries where these businesses are underrepresented or substantially underrepresented.

Changes in advertising contract obligations to specified businesses were in some cases associated with a small number of contracts. For example, the $29 million increase in advertising contract obligations to women-owned businesses between fiscal years 2016 and 2017 was due in large part to two contracts with an advertising agency with combined obligations of about $22 million in fiscal year 2017. In addition, two contracts that had each been classified under both the SDB and minority-owned categories contributed to the decrease in these two categories between fiscal years 2016 and 2017. Obligations to these two contracts declined by about $16 million over this period, a substantial portion of the overall declines in these two categories. (Obligations to SDBs declined by about $23 million; those to minority-owned businesses declined by about $21 million.) Although obligations to the SDB and minority-owned categories decreased from fiscal year 2016 to 2017, the numbers of these businesses receiving advertising contract obligations both increased. The number of SDBs receiving these obligations went from 123 to 134; the number of minority-owned businesses went from 95 to 98.

Federal agencies are also required to set aside procurements exclusively for small businesses or businesses in the 8(a) program under certain circumstances, and specific authorities exist to allow award of a contract on a sole source basis to a business in the 8(a) program. However, these authorities were in place prior to fiscal year 2013 and therefore, according to SBA officials, it is unlikely they would have caused a change in contracting activity over the past 5 years.

As mentioned above, businesses may be classified under more than one category, and thus there is overlap in obligations and contracts among specified business categories. For example, about one-quarter ($147 million) of the $570 million in advertising contract obligations directed to specified businesses over the 5-year period went to businesses classified under all three categories. Figure 4 shows the amount of advertising contract obligations going to each business category and combination of categories.
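One way to make the overlap handling concrete is a short sketch of the counting rule: category totals may overlap, but a consolidated total counts each obligation exactly once. The Python sketch below uses invented records and category labels for illustration; it is not drawn from actual FPDS-NG data or field names.

```python
# A minimal sketch of the de-duplication idea: per-category totals count an
# obligation once under every category the business holds, while the
# consolidated total counts each obligation exactly once.
contract_obligations = [
    # (business_id, obligation_dollars, categories the business falls under)
    ("firm_a", 5_000_000, {"sdb", "women_owned"}),
    ("firm_b", 2_000_000, {"minority_owned"}),
    ("firm_c", 1_500_000, {"sdb", "minority_owned", "women_owned"}),
]

# Per-category totals: these overlap by design, so summing them would
# double or triple count obligations to multi-category businesses.
per_category: dict[str, int] = {}
for business, dollars, categories in contract_obligations:
    for category in categories:
        per_category[category] = per_category.get(category, 0) + dollars

# Consolidated total: each obligation counted once, regardless of how many
# of the three categories the business is classified under.
consolidated_total = sum(dollars for business, dollars, categories in contract_obligations)

print(per_category)        # sdb: 6,500,000; women_owned: 6,500,000; minority_owned: 3,500,000
print(consolidated_total)  # 8,500,000 -- not the 16,500,000 a naive sum of categories gives
```

The gap between the naive sum and the consolidated total is exactly the overlap the figures in this report are constructed to avoid.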
Over Half of Federal Advertising Contract Obligations to Minority-Owned Businesses Went to Those Owned by Hispanic-Americans

Among the different types of minority-owned businesses, those classified as being owned by Hispanic-Americans received the most obligations (just over half) from federal advertising contracts over fiscal years 2013 through 2017. Figure 5 shows the breakdown of amounts obligated over these fiscal years to minority-owned businesses. As with the other specified business categories, advertising obligations to specific minority groups were concentrated among a relatively small number of businesses. For example, advertising contract obligations to one particular Native-American-owned business—for graphic design, print, and other communications services—represented 34 percent of all obligations to Native-American-owned businesses. See table 3 in appendix II for additional details on each business category's contracts.

DOD, DHS, and HHS Directed the Most Advertising Contract Obligations to Specified Businesses, though Other Agencies Directed Greater Percentages of These Obligations to the Businesses

DOD, DHS, and HHS Were Responsible for Nearly Three-Fourths of Federal Advertising Obligations to Specified Businesses

The departments of Defense (DOD), Homeland Security (DHS), and Health and Human Services (HHS) were responsible for 73 percent of the $570 million of federal advertising contract obligations that went to specified businesses over fiscal years 2013 through 2017. Thirty-four other agencies were responsible for the remaining 27 percent of these obligations. Figure 6 shows the breakdown of total federal advertising contract obligations, with the amount of obligations going to specified businesses, and amounts obligated by DOD, HHS, DHS, and all other agencies.

For each of the 5 years we reviewed, DOD, HHS, and DHS were consistently the top three agencies in terms of the amount of advertising contract obligations they directed to specified businesses. Additionally, all three generally increased the amounts they obligated to these businesses. For example, in fiscal year 2013, these three agencies obligated over 60 percent of all federal advertising contract obligations to specified businesses; in 2017 they accounted for more than 80 percent of these obligations. Figure 7 shows breakdowns of these and all other agencies' advertising contract obligations to specified businesses. Much of the increase in these obligations from year to year is associated with increases in obligations by DOD, DHS, and HHS. For example, advertising contract obligations to these businesses increased by about $37 million between fiscal years 2015 and 2016, with these three agencies responsible for about $22 million, or 60 percent, of the increase.

DOD, DHS, and HHS are also among the agencies that obligated the most to advertising contracts overall. Together they obligated about $3.4 billion for these types of contracts over the 5-year period, which represents 79 percent of the federal government's obligations. DOD obligated the most—over $2.6 billion—to advertising contracts over the 5-year period, which accounted for over 60 percent of these obligations over fiscal years 2013 through 2017. Table 4 in appendix II provides more details on the agencies that obligated the most overall for advertising contracts and those that directed the most to specified businesses.
In our prior report on advertising contract obligations going to small disadvantaged and minority-owned businesses, we highlighted annual obligations data for five agencies. As an update to that analysis, we examined annual advertising contract obligations for the five agencies that obligated the most on advertising contracts over the past 5 years: DOD, DHS, HHS, and the departments of Transportation (DOT) and Veterans Affairs (VA). Figure 8 illustrates these agencies' advertising contract obligations and the percent going to specified businesses in each year. As shown in the figure above, top-spending agencies' obligations to specified businesses fluctuated over fiscal years 2013 through 2017.

DOD. DOD's obligations to specified businesses increased in most of the fiscal years over the 5-year period, regardless of whether its total advertising obligations increased or decreased. For example, in fiscal year 2016, DOD's total advertising obligations declined by over $100 million; however, its obligations to specified business categories increased. In fiscal year 2017, DOD obligated the most of any agency to specified businesses.

HHS. Similarly, HHS, which obligated approximately $151 million to specified businesses, the most of any agency over the 5-year period, also increased its obligations to those businesses regardless of its overall advertising obligations from year to year. For example, from fiscal years 2016 to 2017, HHS' advertising contract obligations to specified businesses increased from $35 million to $37 million, even though they declined as a percentage of its overall advertising contract obligations, going from 65 percent to 57 percent.

DOT. DOT generally increased its total advertising obligations during the 5-year period, from approximately $46 million in 2013 to $57 million in 2017. However, during this time DOT's obligations to specified businesses generally decreased, from approximately $1.8 million in 2013 to approximately $560,000 in fiscal year 2017.

DHS. DHS generally increased its total advertising obligations each year of the 5-year period and generally increased its obligations to specified businesses. DHS obligated the third largest amount (behind HHS and DOD) to these businesses from fiscal years 2013 through 2017.

VA. VA generally decreased its total advertising obligations, from approximately $63 million in fiscal year 2013 to approximately $15 million in fiscal year 2017, and its obligations to specified businesses, from approximately $8 million in fiscal year 2013 to approximately $1.3 million in fiscal year 2017.

Table 5 in appendix II shows the 20 agencies that have obligated the most for advertising contracts over fiscal years 2013 through 2017 and the amounts they directed to specified businesses.

Agencies with Greater Percentages of Advertising Contract Obligations Going to Specified Businesses Generally Obligated Lower Amounts Overall

In several cases agencies directed more than half of their advertising contract obligations to specified businesses, though these agencies in general obligated less to advertising contracts than top-spending agencies. Ten agencies with advertising contract obligations of at least $1 million over fiscal years 2013 through 2017, such as the departments of Justice and Energy, directed at least half of their obligations to specified businesses.
With the exception of DHS, which obligated about $200 million for advertising contracts over the 5-year period, these agencies all obligated less than $25 million for advertising contracts over this timeframe. In contrast, DOD directed a relatively small share (5 percent) of its advertising contract obligations to specified businesses, making it 29th out of 37 agencies when ranked according to the percentage of advertising contract obligations going to these businesses. However, because the department obligated a large amount for advertising contracts ($2.6 billion over the 5-year period), it ranked second in terms of the amount obligated to specified businesses. Some agencies directed all or nearly all of their advertising contract obligations to specified businesses, but because these agencies' advertising contract obligations were relatively low, the amounts they directed to these businesses were also relatively low. For example, the Nuclear Regulatory Commission directed all of its federal advertising contract obligations—totaling approximately $1 million—to specified businesses from fiscal years 2013 through 2017. Additionally, the National Aeronautics and Space Administration directed 98 percent of its approximately $21 million in advertising contract obligations to these businesses from 2013 through 2017. Table 6 in appendix II shows the top 20 agencies in terms of share of advertising contract obligations going to these businesses.

Agency Comments

We provided a draft of this report to the SBA Administrator for comment. SBA provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, SBA, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6806 or nguyentt@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

Appendix I: Objectives, Scope, and Methodology

Our objectives were to identify and analyze (1) the amount federal agencies have obligated on advertising contracts over the 5 most recent fiscal years (2013 through 2017) and the amount going to small disadvantaged businesses (SDB) and those owned by minorities and women; and (2) the agencies that have directed the most advertising contract obligations to these businesses and how this has changed over time. To address both objectives, we analyzed data from the Federal Procurement Data System-Next Generation (FPDS-NG) database for fiscal years 2013 through 2017. This database captures information on the federal government's contract awards and obligations and includes data for most federal contract actions that have an estimated value of more than $3,500. We reviewed obligations data for contracts coded under the "support – management: advertising" or "support – management: public relations" product service codes. For reporting purposes, we refer to these two contract types collectively as "advertising contracts." Every contract action reported in FPDS-NG is categorized by a product service code to indicate what was purchased. Additionally, contracts reported in FPDS-NG are categorized by a North American Industry Classification System (NAICS) code, which indicates the industry within which the product or service falls.
For purposes of this report, we used the product service codes mentioned above to identify advertising contracts because product service codes are assigned at the individual contract or order level. The Small Business Administration (SBA) uses NAICS codes to identify the predominant service or supply on a contract. NAICS codes are an integral element of size standards and the determination of whether the business receiving the contract award is a small business. In addition to analyzing FPDS-NG data, we interviewed SBA officials responsible for assessing government-wide and agency contracting with small and other business categories about their perspectives on trends in federal contracting.

We assessed the reliability of these data by considering known strengths and weaknesses of FPDS-NG data, based on our past work, and by looking for obvious errors and inconsistencies in the data we used for our analysis. We also interviewed SBA officials, who use FPDS-NG data in assessing federal contracting, about the database's reliability. Based on these steps, we concluded that the data were sufficiently reliable for our purposes.

We focused our analysis of FPDS-NG data on those advertising contracts categorized as being awarded to (1) SDBs, 8(a) businesses, or both; (2) businesses owned by minorities; and/or (3) businesses owned by women. SDBs, minority-owned, and women-owned businesses may self-identify in the government's System for Award Management as these types of businesses. For purposes of this report, we refer to the three categories of businesses we examined as "specified businesses." Criteria for certification as an 8(a) business are similar to those for SDB classification, including that businesses be primarily owned by a person or people who are socially and economically disadvantaged. In addition, 8(a) businesses must also demonstrate the potential for success and business principals must demonstrate good character. Because of these similarities, for analysis and reporting purposes we combined 8(a) businesses and SDBs into one group, which we refer to in this report as "small disadvantaged businesses" or "SDBs." We interviewed SBA officials to obtain their perspectives on the changes, but did not attempt to identify root causes for changes over the past 5 years, as that was beyond our scope.

We analyzed FPDS-NG data at the government-wide level to identify overall trends in obligations for advertising contracts and the amounts going to specified business categories. We focused on the amount of advertising contract obligations going to these business categories individually and combined, and examined how these amounts had changed over the past 5 fiscal years. Within the minority-owned business category, we also analyzed the amounts of obligations going to businesses owned by Asian-Pacific-, Subcontinent-Asian-, Black-, Hispanic-, and Native-Americans, and "other minority" owned businesses. Businesses self-identify as these categories in the federal government's System for Award Management. We also examined data on the number of contracts and businesses receiving obligations through advertising contracts. There is overlap among the three specified business categories—SDBs and those owned by minorities and women. For example, a business may be classified as both an SDB and a women-owned business.
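To illustrate the screening and classification logic just described, the following Python sketch filters contract actions by product service description and flags the specified-business categories. The field names, record layout, and description strings are assumptions for illustration only; they do not reflect the literal FPDS-NG schema or code values.

```python
# Sketch of the report's screening logic: keep only contract actions coded
# under the two advertising-related product service codes, then flag the
# specified-business categories. All names here are illustrative placeholders.
ADVERTISING_PSC_DESCRIPTIONS = {
    "support - management: advertising",
    "support - management: public relations",
}

def is_advertising_contract(action: dict) -> bool:
    """Keep an action only if its product service description is one of the
    two advertising-related codes, per the report's scoping decision."""
    return action["product_service_description"] in ADVERTISING_PSC_DESCRIPTIONS

def specified_categories(action: dict) -> set[str]:
    """Return the specified-business categories an action falls under.
    8(a) participants are folded into the SDB group, mirroring the report."""
    categories = set()
    if action.get("is_sdb") or action.get("is_8a"):
        categories.add("sdb")
    if action.get("is_minority_owned"):
        categories.add("minority_owned")
    if action.get("is_women_owned"):
        categories.add("women_owned")
    return categories

action = {
    "product_service_description": "support - management: advertising",
    "is_8a": True,
    "is_women_owned": True,
}
if is_advertising_contract(action):
    print(specified_categories(action))  # both 'sdb' and 'women_owned'
```

Note that the example action carries two category flags; this is the overlap discussed above.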
We accounted for this overlap when calculating and presenting data on the amount of advertising contract obligations going to the three business categories combined so that we did not double or triple count obligations. We also analyzed FPDS-NG data on specific agencies' obligations for advertising contracts and the amounts they obligated to specified businesses. We used these data to identify the agencies that ranked highest (in dollars and as a percentage of total advertising contract obligations) in advertising contract obligations to specified businesses. We also examined how agency obligations to these businesses have changed over the past 5 years.

We conducted this performance audit from October 2017 to July 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Agencies' Advertising Obligations to Specified Businesses

Table 2 shows the amounts of advertising contract obligations that went to specified businesses over fiscal years 2013 through 2017. As shown, the amounts directed to these business categories generally increased both in dollars and as a percentage of total advertising obligations. Specified business categories each received at least $300 million in obligations over fiscal years 2013 through 2017. There were variations in the number of businesses receiving obligations and the concentration of obligations among contractors. Table 3 provides additional details on these characteristics.

Table 4 below shows the agencies that obligated the most for advertising contracts overall, and those that obligated the most through these contracts to specified businesses. Specified businesses are those classified as small disadvantaged businesses (including those that self-identify as small disadvantaged businesses and those that are certified by SBA for the 8(a) business development program); minority-owned businesses; and women-owned businesses. Minority-owned businesses include those categorized as being owned by Asian-Pacific-, Subcontinent-Asian-, Black-, Hispanic-, and Native-Americans, as well as "other minorities."

Table 5 shows the 20 agencies that obligated the most for federal advertising contracts over fiscal years 2013 through 2017, with the percentages of these obligations going to specified businesses. Specified businesses are those classified as small disadvantaged businesses (including those that self-identify as small disadvantaged businesses, and those that are certified by SBA for the 8(a) business development program); minority-owned businesses; and women-owned businesses. Minority-owned businesses include those categorized as being owned by Asian-Pacific-, Subcontinent-Asian-, Black-, Hispanic-, and Native-Americans, as well as "other minorities."

Table 6 shows the 20 agencies that directed the greatest share of these obligations to specified businesses.

Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

Other GAO staff who made contributions to this report include Carol Henn (Assistant Director); Ann Marie Cortez; Jenny Chanley; Kristine Hassinger; Julia Kennon; Kathleen Padulchick; and Erik Shive.
Why GAO Did This Study

The federal government spends close to $1 billion annually for advertising activities that, among other things, inform the public about programs and services. The government seeks to provide procurement opportunities for these services to businesses such as SDBs and those owned by minorities and women. SDBs are those primarily owned by one or more socially and economically disadvantaged individuals. GAO was asked to analyze federal advertising obligations to these types of businesses. This report discusses (1) the amount federal agencies have obligated towards advertising contracts over the most recent 5 fiscal years (2013 through 2017) and the amount going to SDBs and businesses owned by minorities and women; and (2) the agencies that have directed the most advertising contract obligations to these businesses and how this has changed over time. GAO analyzed data on advertising contracts from the Federal Procurement Data System-Next Generation database for fiscal years 2013 through 2017. GAO also interviewed Small Business Administration officials. The Small Business Administration provided technical comments on this report, which GAO incorporated as appropriate.

What GAO Found

Federal advertising contract obligations to small disadvantaged businesses (SDB) and businesses of all sizes owned by minorities and women (specified businesses) generally increased from fiscal years 2013 through 2017, and constituted 13 percent of all advertising obligations over this period. This figure is consistent with the percentage of all federal contract obligations to these businesses over this period. Overall, advertising contract obligations to all three categories of businesses increased between fiscal years 2013 and 2017, as shown in the figure below. Within the minority-owned business category, which includes businesses owned by Asian-Pacific-, Subcontinent-Asian-, Black-, Hispanic-, and Native-Americans, over half of the obligations went to those owned by Hispanic-Americans.

Three agencies—the departments of Defense (DOD), Health and Human Services, and Homeland Security—were responsible for nearly three-quarters of advertising contract obligations to the three categories of businesses from fiscal years 2013 through 2017. These agencies were associated with much of the increase in these obligations to specified businesses over the 5-year period. Although some agencies obligated higher shares of their advertising contract obligations to these businesses, they generally obligated fewer dollars than DOD and the two other agencies. For example, the National Aeronautics and Space Administration directed 98 percent of its obligations to these businesses, but the agency's total advertising contract obligations were $21 million over the 5-year period. DOD obligated $2.6 billion for these contracts over the same period.
Background

The Defense Base Closure and Realignment Act of 1990, as amended, has governed the BRAC process since 1990. The law established the procedures for making recommendations for base closures and realignments and originally required DOD to submit a 6-year force-structure plan and base its closure and realignment decisions on that plan. For the 1991, 1993, and 1995 BRAC rounds, DOD performed a detailed capacity analysis based on extensive data-collection efforts to identify specific bases capable of accommodating additional forces to develop its proposed list of closures and realignments. In 1997, after DOD requested another BRAC round, Congress required DOD to submit a report on, among other things, the need for any additional BRAC rounds and an estimate of the amount of DOD's excess capacity at the time. In 2001, when Congress authorized a BRAC round to begin in 2005, it required DOD to submit a force-structure plan to cover a 20-year period and an infrastructure inventory with its budget-justification documents for fiscal year 2005 before proceeding with the extensive data gathering efforts and analysis associated with the BRAC process. The submission was also to discuss categories of excess infrastructure and infrastructure capacity.

Prior statutes included provisions for us to review DOD's 1998 and 2004 excess capacity reports, which used a method to estimate excess capacity that was very similar to the method used in its 2017 report. Our 1998 and 2004 reports reviewed DOD's 1998 and 2004 excess capacity reports, respectively. Our 2013 report assessed the estimating methods used in both the 1998 and 2004 excess capacity reports. In these three previous reports, we concluded that DOD's methodology to estimate excess capacity had a number of limitations, and thus gave a rough indication that excess capacity existed. Specifically, we identified the following four limitations with the method used in DOD's 1998 and 2004 reports:

Installations were assigned to a single-mission category, yet most installations perform more than one mission.

Military services used different metrics to evaluate installations in similar mission categories.

DOD used a 1989 baseline that did not take into account any excess capacity or capacity shortfall that may have existed at the time.

DOD's analysis did not consider the possibility that a mission category might have a capacity shortage; mission categories were determined to have either an excess or no excess capacity.

DOD agreed that our 2013 report properly highlighted the limitations in DOD's methodology for estimating excess capacity. At that time, DOD reiterated that the purpose of its methodology is to provide an indication of whether sufficient excess exists to justify authorization of another BRAC round. DOD concluded that only through the BRAC process is it able to determine excess capacity by installation and mission or function in a fair and thorough way. A list of related GAO products is included at the end of this report.

DOD's 2017 Infrastructure Capacity Report Addressed or Partially Addressed the Required Elements

DOD's 2017 infrastructure capacity report addressed or partially addressed the five required elements from section 2815 of the NDAA for Fiscal Year 2016. As shown in table 1, DOD addressed four of the required elements and partially addressed one element.
DOD’s report partially addressed the requirement to include a description of the infrastructure capacity required to support the force structure because the report describes only a small portion of the capacity needed. For example, in the case of Air Force large aircraft installations, the needed infrastructure was described in terms of the square yards of apron space needed to support the assigned aircraft, but did not describe other infrastructure needs such as aircraft hangars, maintenance facilities, and administrative space used by squadrons assigned to the installation. Similarly, in the case of Army maneuver installations, the needed infrastructure was described in terms of maneuver acres needed, but did not describe other infrastructure necessary to support assigned units. Consequently, the description of infrastructure needed does not provide DOD and Congress with a complete picture of the infrastructure needed to support the force structure at these major installations. However, as DOD points out in its report to Congress, the analysis performed does not provide the detail necessary to identify specific infrastructure for elimination; instead it provides an indicator of the categories of excess. DOD also stated that this level of detail is only provided through the formal BRAC process. Consequently, without a formal BRAC round, DOD does not have the details necessary to identify the total infrastructure necessary to support its current force structure. Therefore, we are not making any recommendations concerning this reporting requirement. DOD’s Excess Capacity Methodology and Analysis Has Limitations That Affect the Accuracy and Analytical Sufficiency of the Estimate DOD’s excess capacity methodology and analysis has limitations that affect the accuracy and analytical sufficiency of the estimate. Specifically, DOD’s use of a 1989 baseline for excess capacity results in inaccurate estimates of excess capacity; DOD’s methodology included assumptions that were not always reasonable; and DOD’s approach to estimating excess capacity is not always sufficient or implemented consistently across the military departments. DOD noted some of these same limitations in its 2017 infrastructure capacity report. DOD’s Use of 1989 Data as the Baseline for Its Excess Capacity Analysis Results in Inaccurate Estimates of Excess Capacity DOD’s use of 1989 data as the baseline for its excess capacity analysis resulted in inaccurate estimates of excess capacity. According to generally accepted research standards, listed in appendix I, the baseline and other data used to support the analysis should be determined to be reliable and valid. Specifically, the baseline should be fully and completely identified and used consistently, where appropriate. In addition, the data limitations should be identified and the effect of these limitations should be fully explained. DOD has also recognized that using 1989 as a baseline did not account for excess capacity that existed in 1989. However, DOD only partially explained the effect of this limitation on its estimate of excess capacity. First, using 1989 as the baseline assumes that the bases and facilities as they existed in 1989 were appropriately sized to support their missions. However, DOD’s 2017 infrastructure capacity report did not provide a rationale for either why 1989 was an appropriate baseline or why the bases and facilities were assumed to be appropriately sized at that time. 
In fact, as discussed below, DOD has stated that excess capacity existed in 1989 but does not attempt to quantify the amount. Further, in at least one mission category, Marine Corps Bases, DOD acknowledges that it overstated excess capacity because the baseline ratio was based on infrastructure numbers that were not adjusted to recognize the documented shortfalls that existed in 1989.

Second, the effects of DOD's assumptions about the 1989 baseline have not been consistently reported by DOD. DOD has used the same baseline in its three analyses conducted over the past 20 years, yet DOD draws different conclusions concerning how the baseline affects its estimates of excess capacity. For example, DOD concluded in 1998 that excess capacity existed in the 1989 baseline because the majority of realignments and closures took place after 1989; in 2004 that very significant excess capacity existed in the 1989 baseline; and in 2017, in DOD's infrastructure capacity report, that the 1989 baseline was both properly sized to support assigned missions and forces and included significant excess capacity. Nevertheless, DOD has consistently stated that its estimate of excess capacity is likely conservative because significant excess existed in 1989. DOD also stated that its analysis provides an indicator of the categories where excess might exist and that only through a BRAC round can the department undertake the detailed analysis necessary to make closure and realignment recommendations. Since 1988, DOD has completed five BRAC rounds that have closed a significant number of DOD facilities. In addition, as discussed below, DOD facility standards and requirements have been updated and new weapon systems have been introduced, which can affect the amount and type of infrastructure needed. Consequently, without a definitive measure of the excess that existed in 1989, as well as adjustments in the method to account for the effect of updated facility standards and requirements and new weapon systems, there is no clear rationale for using 1989 as a baseline year in the estimate of excess capacity provided by DOD's analysis.

Third, during the last 29 years DOD facility standards and requirements have been updated, and new weapon systems with greater ranges and capabilities have been developed, changing the amount and type of infrastructure needed to support DOD's forces. For example, we recently reported that only 11 of the Navy's 18 drydocks are configured to perform maintenance on the newer ship and submarine classes like the Ford-class aircraft carrier and Virginia-class submarine. Using such an old baseline, without making adjustments in the method to account for these changes, leads us to conclude that DOD's results are likely inaccurate.

Because DOD continues to use its outdated 1989 baseline, we found that DOD's 2017 excess capacity analysis results in estimates that are likely inaccurate. Without updating the baseline that is used in the methodology to calculate excess capacity across DOD, DOD will not have accurate information for making critical decisions related to investments in infrastructure. Furthermore, Congress will not have accurate information to make fully informed decisions concerning whether and to what extent another BRAC round is needed.

DOD's Methodology for Estimating Excess Capacity Includes Assumptions That Are Not Always Reasonable

DOD's excess capacity methodology includes assumptions that are not always reasonable, such as assigning installations to only one mission category.
According to generally accepted research standards, reasonable assumptions are characterized by being realistic, credible, and accompanied by a statement of their rationale. In addition, these standards state that assumptions should support a sound analysis (e.g., the assumptions should not skew the results of the analysis or reduce the range of possible outcomes). We previously reported limitations related to DOD's assumptions when we examined DOD's excess capacity analyses in 1998, 2004, and 2013. DOD continues to use in 2017 the same methodology it has previously used to estimate excess capacity; thus, these limitations persist in its 2017 report.

First, DOD's approach of assigning an installation to only one mission category treats an installation as if it has only one mission, yet most installations support more than one mission. As a result, only a small portion of an installation's infrastructure may be considered by DOD's analysis. For example, in the case of Fort Bragg, North Carolina, which is included in the maneuver base category by the Army, base acres are included in the analysis, but more than 43.8 million square feet of infrastructure is not considered. Similarly, in the case of Naval Base Kitsap, Washington, which is included in the Naval Station category by the Navy, the pier space is considered in the analysis, but the more than 7.5 million square feet of facilities is not considered. In addition, as discussed later in this report, there were instances where the military departments included installations in more than one mission category. Finally, there are several categories that measure capacity in terms of direct labor hours or work-years, but the analysis does not include the actual infrastructure, such as buildings, structures, and linear structures. Consequently, the assumption that each installation is included in one mission category may not be reasonable because only a portion of the infrastructure at the installations is being considered when identifying potential excess capacity.

Second, as implemented, DOD's estimate of excess capacity may be overstated because its methodology did not account for any potential shortfalls in capacity—not having enough infrastructure to support the mission—and did not provide a rationale for this approach in its calculations. As illustrated in table 2, when DOD's calculation identifies that the proportional capacity is less than the infrastructure capacity for the year being analyzed (i.e., DOD needs less infrastructure than it has), DOD concludes that excess capacity exists and provides a percentage amount of excess capacity. However, when the proportional capacity exceeds the infrastructure capacity for the year being analyzed (i.e., DOD may need more infrastructure), DOD concludes that no excess capacity exists. Moreover, DOD's calculation provides a zero percentage for excess capacity, rather than a negative percentage that would account for a potential capacity shortfall in its analysis. DOD's 2017 infrastructure capacity analysis identifies zero percent excess capacity in nearly half (14 of 32) of the installation categories included in the analysis, including 8 of 12 Navy installation categories.
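To make the mechanics concrete, the following minimal Python sketch mirrors the per-category calculation as characterized above and in table 2. The function and variable names, and the numbers, are invented for illustration; they are not drawn from DOD's data.

```python
# Simplified sketch of the per-category ratio method described in the text.
# Negative results (potential shortfalls) are reported as 0.0 percent, which
# is the clamping behavior discussed above.
def excess_capacity_percent(base_infra, base_force, cur_infra, cur_force):
    """Excess capacity for one mission category, per the ratio method."""
    baseline_ratio = base_infra / base_force            # e.g., acres per battalion in 1989
    proportional_capacity = baseline_ratio * cur_force  # infrastructure "needed" today
    excess = (cur_infra - proportional_capacity) / cur_infra
    return max(excess, 0.0) * 100  # shortfalls floored at zero

# Category with excess: more infrastructure than the 1989 ratio implies.
print(excess_capacity_percent(1000, 10, 1000, 8))   # 20.0 percent excess

# Category with a shortfall: proportional capacity exceeds what exists,
# but the method reports 0.0 rather than a negative percentage.
print(excess_capacity_percent(1000, 10, 1000, 12))  # 0.0 percent
```

Flooring the second case at 0.0 rather than carrying a negative 20 percent into the department-wide roll-up is what raises the weighted average discussed next.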
Because DOD’s methodology uses the excess capacity percentages from the 32 installation categories to compute a weighted average for excess capacity across the department, treating a negative percentage from a mission category as 0.0 percent would increase DOD’s overall excess capacity percentage. DOD officials believe that treating these 14 installation categories as if they have 0.0 percent excess capacity is appropriate because the purpose of the analysis is to identify the categories where excess capacity may exist. In addition, they asserted that treating these categories as if they had a shortfall would assume that infrastructure from 1 of the 18 other installation categories identified as having excess capacity could be used to offset the shortfall when the categories are likely to have different metrics. DOD officials also told us that, from their perspective, no increase does not mean that there is large deficit of infrastructure within a mission category; it just means that the infrastructure to force-structure ratio indicates that the particular category does not have excess. We found, however, 6 installation categories where the force-structure measure exceeds the capacity measure, which indicates that a shortfall exists. In addition, because most installations support more than one mission and have more infrastructure present than the mission category metric measures, including potential capacity shortfall in its analysis could provide DOD and Congress with a more accurate estimate of excess capacity. DOD’s methodology to estimate excess capacity includes assumptions that are not reasonable. Without using assumptions to estimate excess capacity that are considered reasonable (i.e., realistic, credible, and accompanied by a statement of their rationale), DOD’s methodology may overstate its estimate of excess capacity. DOD’s Method for Estimating Excess Capacity Is Not Always Sufficient or Implemented Consistently DOD’s method for estimating excess capacity across the department is not sufficient because it is based on a nongeneralizable sample and therefore its reported estimates cannot be generalized to describe excess capacity across the department. Furthermore, DOD’s sampling method is not always implemented effectively because some of the military departments adjusted the sampling approach. According to generally accepted research standards, the methods used and the analysis should be sufficient for accomplishing the objectives of the study. In addition, the analysis should be executed consistently with the study plan or the described methodology. We found that the calculations performed by DOD in the analysis were generally accurate. First, DOD and the military departments used a nongeneralizable sample of different types of installations to develop an excess capacity estimate. However, a nongeneralizable sample cannot be used to develop a department-wide estimate of excess capacity because this technique is not designed to yield a sound probable statistical estimate. Specifically, when the analysis was first done in 1998, the military departments sorted installations into categories and only included installations that were considered by the departments to be “major installations.” The departments were to assign each “major installation” to only one mission category. The departments were to then calculate the estimated capacity by mission category for both the baseline year, 1989, and the projected force-structure year, 2003. 
The same approach was used for the 2017 analysis; however, neither the 1998 nor the 2017 analysis provided guidance to the military departments concerning what constitutes a "major installation." This approach for selecting and sorting samples of installations relies on the judgment of each of the military departments, yielding a nongeneralizable sample of installations that varies across the military departments. Consequently, the results from the analysis cannot be used to make inferences about the amount of excess capacity across DOD.

Second, the military departments did not follow a consistent approach when calculating excess capacity. Specifically, DOD's method bases its excess capacity estimate on the number of installations in each mission category. However, we found that, in the 2017 analysis, the military departments did not consistently follow the practice of including installations in only one category. For example, we found several installations that were included in more than one category by some of the military departments:

In the 2017 analysis, the Air Force included two subcategories under the heading of "Education and Training": "Flight Training" and "Classroom." The flight training subcategory included 13 installations and the classroom subcategory included 14 installations. We found that all 13 of the flight training installations were also included as classroom installations. Yet, when the analysis was performed in both 1998 and 2004, the same 14 installations were used, but 8 of the installations were then categorized as flight training installations and the other 6 installations were categorized as classroom installations. If this previous categorization approach had been used in the 2017 analysis, the Air Force estimate of excess capacity would have been about 2 percent lower.

In two instances, the Navy included the same installations in both the "Naval Station" and "Air Station" categories and, in one instance, the Navy included a joint base in both the "Naval Station" and "Shipyards" categories. According to a Navy official, these installations were included in both categories because a major mission would have been omitted from the analysis if the bases were included in only one category. This treatment, however, is not consistent with DOD's methodology.

Including the same installation in multiple installation categories may have resulted in double counting of capacity, thereby affecting the resulting estimate of excess capacity for multiple installation categories.

Third, the military departments did not consistently account for joint bases in their excess capacity analyses. In some instances, we found that only the lead military department included the joint base in its analysis. For example, in the case of Joint Base Lewis-McChord, Washington—an Army-led joint base comprising Fort Lewis and McChord Air Force Base—the Army, consistent with its treatment of Fort Lewis in previous excess capacity analyses, included the joint base in its maneuver category. However, the Air Force did not include McChord Air Force Base in its analysis in 2017 although it had in previous years.
In these instances where only the lead military department included the joint base in its analysis, the infrastructure associated with the tenant military department was usually left out of the analysis because the metric used by the lead department does not incorporate the same measures of infrastructure and force structure as the tenant department. In the Joint Base Lewis-McChord example, the Army included the base in the maneuver category, which is measured by the ratio of maneuver acres to maneuver battalion equivalents, while the Air Force had previously used the ratio of parking apron space to number of aircraft to measure capacity at McChord Air Force Base. Consequently, DOD's analysis no longer takes into account the infrastructure that supports the flying mission at this joint base. In other instances, we found that both the lead military department and the tenant military department included their portion of the infrastructure in their analyses. For example, for Joint Base Charleston, South Carolina—an Air Force-led joint base comprising Charleston Air Force Base and Naval Support Activity Charleston—each of the military departments continued to include its portion of the infrastructure in its individual analysis. Consequently, DOD's analysis accounts for the infrastructure that supports both missions at the joint base.

DOD's method for estimating excess capacity is not always sufficient and is not implemented consistently across the military departments because, according to DOD officials, DOD lacks specific department-wide guidance. Specifically, explicit guidance does not exist that clearly defines "major installations," identifies whether and when it is appropriate to include a facility in more than one category to take into account multiple missions at the facilities, or provides protocols for assessing excess capacity at joint bases. These topics were discussed in meetings with military department officials, but, according to DOD officials, no specific method was identified for department-wide use. Without developing guidance for the military departments, the estimate of excess capacity may not be based on consistent methods across the department, resulting in inaccurate estimates.

Conclusions

DOD's 2017 excess capacity analysis does not have the accuracy and analytical sufficiency to provide Congress with a reasonable estimate of the actual excess capacity within the department. DOD recognizes the limitations of its analysis, specifically noting that the resulting percentages of excess capacity are at best indicators to justify the more detailed analysis of excess capacity provided by a full BRAC analysis. Specifically, DOD used a baseline for the analysis that did not fully take into account changes in infrastructure needs since 1989, used assumptions in its analysis that are not reasonable, and used methods that were not sufficient or implemented consistently. These limitations resulted in excess capacity estimates that do not have the accuracy and analytical sufficiency to support decision making on future BRAC rounds. Without improvements to DOD's method of estimating excess capacity, DOD is not providing the information that Congress requires to make decisions concerning the management of excess infrastructure capacity within the department.
Similarly, DOD does not have the information it needs to appropriately manage its infrastructure capacity and therefore cannot make informed decisions about what it needs to support its mission as the land and infrastructure requirements of newer weapon systems are introduced. Moreover, the combined effect of neither DOD nor Congress having this information means that DOD will continue to experience challenges with funding related to its infrastructure and potential excess costs.

Recommendations for Executive Action

We are making the following three recommendations to DOD:

The Secretary of Defense should ensure that the Assistant Secretary of Defense for Energy, Installations, and Environment reliably updates the baseline used for estimating excess infrastructure capacity. (Recommendation 1)

The Secretary of Defense should ensure that the Assistant Secretary of Defense for Energy, Installations, and Environment uses assumptions in estimating excess capacity that are considered reasonable (i.e., realistic, credible, and accompanied by a statement of their rationale). (Recommendation 2)

The Secretary of Defense should ensure that the Assistant Secretary of Defense for Energy, Installations, and Environment develops guidance to improve the methods used in the analysis and ensure consistent implementation of DOD's methodology to produce reliable estimates of excess capacity across the department. The guidance, at a minimum, should clearly define "major installations," identify whether and when it is appropriate to include a facility in more than one category to take into account multiple missions at the facilities, and provide protocols for assessing excess capacity at joint bases. (Recommendation 3)

Agency Comments and Our Evaluation

We provided a draft of this report to the Department of Defense (DOD) for comment. DOD provided written comments, which are reproduced in appendix II. DOD concurred with one recommendation and partially concurred with the other two recommendations.

DOD concurred with our first recommendation, which called for it to reliably update the baseline used for estimating excess infrastructure capacity. Specifically, the department stated that it would review methods to update the baseline for any future excess capacity analysis that is undertaken.

The department partially concurred with our second recommendation, which called for the department to use assumptions that are considered reasonable (i.e., realistic, credible, and accompanied by a statement of their rationale) in estimating excess capacity. Specifically, the department agreed that its capacity report should lay out any assumptions made and the rationale for each assumption, and it stated that it will ensure that any future capacity report includes that information. The department did not concur, however, that the assumptions used in its 2017 infrastructure capacity report were other than reasonable, realistic, or credible. While we are encouraged that the department will lay out any assumptions and the rationale for each assumption in future capacity reports, not all assumptions used in the 2017 analysis were reasonable (i.e., realistic, credible, and accompanied by a statement of their rationale), as outlined in this report. For example, we found that assigning installations to only one mission category was not realistic because most installations support more than one mission.
The department partially concurred with our third recommendation, which called for DOD to develop guidance to improve the methods used in the analysis and ensure consistent implementation of DOD's methodology to produce reliable estimates of excess capacity across the department. This guidance, at a minimum, should clearly define "major installations," identify whether and when it is appropriate to include a facility in more than one category to take into account multiple missions at the facilities, and provide protocols for assessing excess capacity at joint bases. DOD concurred that guidance should precede any future infrastructure capacity review and that such guidance should include definitions and implementation instructions, but it stated that the three items we identified would not necessarily be applicable to a future analysis. Provided that future DOD guidance addresses all appropriate characteristics for analysis, such guidance would meet the intent of our recommendation.

We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Secretaries of the Army, Navy, and Air Force; and the Assistant Secretary of Defense for Energy, Installations, and Environment. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4523 or leporeb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III.

Appendix I: Generally Accepted Research Standards Relevant to DOD's Infrastructure Capacity Report

Table 3 describes the generally accepted research standards, identifies the standards we used in evaluating the quality of the research results conveyed in DOD's 2017 infrastructure capacity report, and provides the rationale for the inclusion or exclusion of each specific standard.

Study plan, scope, and objectives follow existing guidance
- Do the study scope and objectives fully address the mandated elements?
- Does the study plan address specified guidance? Rationale: The team is not aware of any standard guidance for the development of this document.
- Was the study plan followed? Rationale: The team is not aware of any study plan that guided the development of this report.
- Were deviations from the study plan explained and documented? Rationale: The team is not aware of any study plan that guided the development of this report.
- Was the study plan updated over the course of the study and the updates explicitly identified in the study and updated study plan? Rationale: The team is not aware of any study plan that guided the development of this report.

Assumptions and limitations are reasonable and, where appropriate, consistent
- Are assumptions and limitations explicitly identified? Rationale: Given the judgment required to execute the analyses, the assumptions and constraints are key to the team's determination of the accuracy and analytical sufficiency of the report.
- Are the assumptions reasonable in that they are realistic, credible, and accompanied by a statement of their rationale? Rationale: Given the judgment required to execute the analyses, the assumptions and constraints are key to the team's determination of the accuracy and analytical sufficiency of the report. The team felt that "reasonable" was sufficient and "necessary" was not readily apparent.
- Do the assumptions support a sound analysis? Rationale: Given the judgment required to execute the analyses, the assumptions and constraints are key to the team's determination of the accuracy and analytical sufficiency of the report.
- Are the assumptions used in analyses common throughout the study and models? Rationale: This standard is not needed to answer the objectives of our report. Other standards for study assumptions are more relevant and sufficient for our purposes.
- Do the assumptions contribute to an objective and balanced research effort?

Scenarios and threats are reasonable
- Did they synthesize the supporting analyses such that it is traceable back to formal guidance?
- Were the threat scenarios validated and Joint Staff approved and documented?
- Do scenarios represent a reasonably complete range of conditions?
- Were the threats varied to allow for the conduct of sensitivity analysis?

Methods are sufficient and successfully executed
- Were the study methods executed consistent with the study plan and schedule?
- Were the methods and analyses sufficient for accomplishing the objectives presented in the study? Rationale: Given the judgment required to execute the analyses, the methodology is key to determining whether DOD accomplishes its objectives.
- Were the models used to support the analyses adequate for their intended purpose, and were the calculations used to support the analyses accurate? Rationale: Important to ensure the model is designed well in addition to accurate arithmetic calculations. DOD conducted analyses and calculations in the report.

Baseline and other data used to support the analyses were determined to be reliable and valid
- Is the baseline fully and completely identified and used consistently, where appropriate, throughout the various analyses? Rationale: DOD's report includes the use of baseline data in the underlying analyses.
- Were data limitations identified and the impact of the limitations fully explained? Rationale: DOD's report uses data obtained from DOD components.
- Were the data determined to be reliable and valid? Rationale: Incorporated with V.e below.
- Was the data reliability and validation process documented? Rationale: DOD's report uses data obtained from DOD components.
- Were the appropriate data gathered to support the analyses? Rationale: OSD obtained data from other DOD components and used it to generate the report.

Analyses are reasonable
- Was a verification, validation, and accreditation report that addresses the models and data certification signed by the study director and included in the report? Rationale: In the context of our engagement, redundant with section II above.
- Were analytic limitations identified and explained? Rationale: In the context of our engagement, redundant with section II above.
- Has each analysis in the study been described? Rationale: In the context of our engagement, redundant with section II above.
- Were the analyses clearly explained and documented?

Measures of effectiveness (MOEs) and essential elements of analysis (EEAs) are addressed
- Do MOEs adhere to the guidance in the study terms of reference? Rationale: The mandate language does not require DOD to include measures of effectiveness in its report. Furthermore, DOD is not required to submit a strategic plan, so Government Performance and Results Act requirements are not applicable.
- Are the MOEs fully addressed in the study? Rationale: Same rationale cited above.
- Are the EEAs addressed in the study? Rationale: Same rationale cited above.

Presentation of results support findings
- Does the report address the objectives?
- Does the report present an assessment that is well documented and conclusions that are supported by the analyses?
- Are conclusions sound and complete? Rationale: We address conclusionary language in the context of the data used to support it above.
- Are recommendations supported by analyses? Rationale: The mandate language does not require DOD to include recommendations, and DOD did not include recommendations.
- Is a realistic range of options provided? Rationale: Not applicable. DOD's report does not include a range of options for force-structure plans and categorical infrastructure inventory.
- Are the study results presented in the report in a clear manner?
- Are study participants/stakeholders (i.e., services and Combatant Commands) informed of the study results and recommendations?

Appendix II: Comments from the Department of Defense

Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Gina Hoffman (Assistant Director), Tracy Barnes, Ronald Bergman, Patricia Donahue, Kerstin Hudon, Terrance Lam, Amie Lesser, Carol Petersen, Clarice Nassif Ransom, Matt Spiers, Tristan To, and John Wren made key contributions to this report.

Related GAO Products

High-Risk Series: Progress on Many High-Risk Areas, While Substantial Efforts Needed on Others. GAO-17-317. Washington, D.C.: February 15, 2017.
Military Base Realignments and Closures: DOD Has Improved Environmental Cleanup Reporting but Should Obtain and Share More Information. GAO-17-151. Washington, D.C.: January 19, 2017.
Defense Infrastructure: DOD Efforts to Prevent and Mitigate Encroachment at Its Installations. GAO-17-86. Washington, D.C.: November 14, 2016.
Defense Facility Condition: Revised Guidance Needed to Improve Oversight of Assessments and Ratings. GAO-16-662. Washington, D.C.: June 23, 2016.
Defense Infrastructure: More Accurate Data Would Allow DOD to Improve the Tracking, Management, and Security of Its Leased Facilities. GAO-16-101. Washington, D.C.: March 15, 2016.
Underutilized Facilities: DOD and GSA Information Sharing May Enhance Opportunities to Use Space at Military Installations. GAO-15-346. Washington, D.C.: June 18, 2015.
Military Base Realignments and Closures: More Guidance and Information Needed to Take Advantage of Opportunities to Consolidate Training. GAO-16-45. Washington, D.C.: February 18, 2016.
Military Base Realignment and Closures: Process for Reusing Property for Homeless Assistance Needs Improvements. GAO-15-274. Washington, D.C.: March 16, 2015.
High-Risk Series: An Update. GAO-15-290. Washington, D.C.: February 11, 2015.
Federal Real Property: Strategic Focus Needed to Help Manage Vast and Diverse Warehouse Portfolio. GAO-15-41. Washington, D.C.: November 12, 2014.
DOD Joint Bases: Implementation Challenges Demonstrate Need to Reevaluate the Program. GAO-14-577. Washington, D.C.: September 19, 2014.
Defense Infrastructure: DOD Needs to Improve Its Efforts to Identify Unutilized and Underutilized Facilities. GAO-14-538. Washington, D.C.: September 8, 2014.
Defense Infrastructure: Army Brigade Combat Team Inactivations Informed by Analyses, but Actions Needed to Improve Stationing Process. GAO-14-76. Washington, D.C.: December 11, 2013.
Military Bases: DOD Has Processes to Comply with Statutory Requirements for Closing or Realigning Installations. GAO-13-645. Washington, D.C.: June 27, 2013.
Defense Infrastructure: DOD's Excess Capacity Estimating Methods Have Limitations. GAO-13-535. Washington, D.C.: June 20, 2013.
Military Bases: Opportunities Exist to Improve Future Base Realignment and Closure Rounds. GAO-13-149. Washington, D.C.: March 7, 2013.
GAO's 2013 High-Risk Series: An Update. GAO-13-283. Washington, D.C.: February 2013.
DOD Joint Bases: Management Improvements Needed to Achieve Greater Efficiencies. GAO-13-134. Washington, D.C.: November 15, 2012.
Military Base Realignments and Closures: The National Geospatial-Intelligence Agency's Technology Center Construction Project. GAO-12-770R. Washington, D.C.: June 29, 2012.
Military Base Realignments and Closures: Updated Costs and Savings Estimates from BRAC 2005. GAO-12-709R. Washington, D.C.: June 29, 2012.
Military Base Realignments and Closures: Key Factors Contributing to BRAC 2005 Results. GAO-12-513T. Washington, D.C.: March 8, 2012.
Excess Facilities: DOD Needs More Complete Information and a Strategy to Guide Its Future Disposal Efforts. GAO-11-814. Washington, D.C.: September 19, 2011.
Military Base Realignments and Closures: Review of the Iowa and Milan Army Ammunition Plants. GAO-11-488R. Washington, D.C.: April 1, 2011.
GAO's 2011 High-Risk Series: An Update. GAO-11-394T. Washington, D.C.: February 17, 2011.
Defense Infrastructure: High-Level Federal Interagency Coordination Is Warranted to Address Transportation Needs beyond the Scope of the Defense Access Roads Program. GAO-11-165. Washington, D.C.: January 26, 2011.
Military Base Realignments and Closures: DOD Is Taking Steps to Mitigate Challenges but Is Not Fully Reporting Some Additional Costs. GAO-10-725R. Washington, D.C.: July 21, 2010.
Defense Infrastructure: Army Needs to Improve Its Facility Planning Systems to Better Support Installations Experiencing Significant Growth. GAO-10-602. Washington, D.C.: June 24, 2010.
Military Base Realignments and Closures: Estimated Costs Have Increased While Savings Estimates Have Decreased Since Fiscal Year 2009. GAO-10-98R. Washington, D.C.: November 13, 2009.
Military Base Realignments and Closures: Transportation Impact of Personnel Increases Will Be Significant, but Long-Term Costs Are Uncertain and Direct Federal Support Is Limited. GAO-09-750. Washington, D.C.: September 9, 2009.
Military Base Realignments and Closures: DOD Needs to Update Savings Estimates and Continue to Address Challenges in Consolidating Supply-Related Functions at Depot Maintenance Locations. GAO-09-703. Washington, D.C.: July 9, 2009.
Defense Infrastructure: DOD Needs to Periodically Review Support Standards and Costs at Joint Bases and Better Inform Congress of Facility Sustainment Funding Uses. GAO-09-336. Washington, D.C.: March 30, 2009.
Military Base Realignments and Closures: DOD Faces Challenges in Implementing Recommendations on Time and Is Not Consistently Updating Savings Estimates. GAO-09-217. Washington, D.C.: January 30, 2009.
Military Base Realignments and Closures: Army Is Developing Plans to Transfer Functions from Fort Monmouth, New Jersey, to Aberdeen Proving Ground, Maryland, but Challenges Remain. GAO-08-1010R. Washington, D.C.: August 13, 2008.
Defense Infrastructure: High-Level Leadership Needed to Help Communities Address Challenges Caused by DOD-Related Growth. GAO-08-665. Washington, D.C.: June 17, 2008.
Defense Infrastructure: DOD Funding for Infrastructure and Road Improvements Surrounding Growth Installations. GAO-08-602R. Washington, D.C.: April 1, 2008.
Military Base Realignments and Closures: Higher Costs and Lower Savings Projected for Implementing Two Key Supply-Related BRAC Recommendations. GAO-08-315. Washington, D.C.: March 5, 2008.
Defense Infrastructure: Realignment of Air Force Special Operations Command Units to Cannon Air Force Base, New Mexico. GAO-08-244R. Washington, D.C.: January 18, 2008.
Military Base Realignments and Closures: Estimated Costs Have Increased and Estimated Savings Have Decreased. GAO-08-341T. Washington, D.C.: December 12, 2007.
Military Base Realignments and Closures: Cost Estimates Have Increased and Are Likely to Continue to Evolve. GAO-08-159. Washington, D.C.: December 11, 2007.
Military Base Realignments and Closures: Impact of Terminating, Relocating, or Outsourcing the Services of the Armed Forces Institute of Pathology. GAO-08-20. Washington, D.C.: November 9, 2007.
Military Base Realignments and Closures: Transfer of Supply, Storage, and Distribution Functions from Military Services to Defense Logistics Agency. GAO-08-121R. Washington, D.C.: October 26, 2007.
Defense Infrastructure: Challenges Increase Risks for Providing Timely Infrastructure Support for Army Installations Expecting Substantial Personnel Growth. GAO-07-1007. Washington, D.C.: September 13, 2007.
Military Base Realignments and Closures: Plan Needed to Monitor Challenges for Completing More Than 100 Armed Forces Reserve Centers. GAO-07-1040. Washington, D.C.: September 13, 2007.
Military Base Realignments and Closures: Observations Related to the 2005 Round. GAO-07-1203R. Washington, D.C.: September 6, 2007.
Military Base Closures: Projected Savings from Fleet Readiness Centers Likely Overstated and Actions Needed to Track Actual Savings and Overcome Certain Challenges. GAO-07-304. Washington, D.C.: June 29, 2007.
Military Base Closures: Management Strategy Needed to Mitigate Challenges and Improve Communication to Help Ensure Timely Implementation of Air National Guard Recommendations. GAO-07-641. Washington, D.C.: May 16, 2007.
Military Base Closures: Opportunities Exist to Improve Environmental Cleanup Cost Reporting and to Expedite Transfer of Unneeded Property. GAO-07-166. Washington, D.C.: January 30, 2007.
Military Bases: Observations on DOD's 2005 Base Realignment and Closure Selection Process and Recommendations. GAO-05-905. Washington, D.C.: July 18, 2005.
Military Bases: Analysis of DOD's 2005 Selection Process and Recommendations for Base Closures and Realignments. GAO-05-785. Washington, D.C.: July 1, 2005.
Military Base Closures: Observations on Prior and Current BRAC Rounds. GAO-05-614. Washington, D.C.: May 3, 2005.
Military Base Closures: Assessment of DOD's 2004 Report on the Need for a Base Realignment and Closure Round. GAO-04-760. Washington, D.C.: May 17, 2004.
Military Bases: Review of DOD's 1998 Report on Base Realignment and Closure. GAO/NSIAD-99-17. Washington, D.C.: November 13, 1998.
Why GAO Did This Study

DOD has used the Base Realignment and Closure (BRAC) process primarily to reduce excess infrastructure capacity, transform the force, and produce cost savings. DOD completed hundreds of base closures and realignments in previous BRAC rounds and intends to work with Congress to address remaining excess capacity.

The NDAA for Fiscal Year 2016 required DOD to submit, among other things, a force structure plan and a categorical infrastructure inventory of worldwide military installations. In response, DOD submitted its infrastructure capacity report to Congress in October 2017. The NDAA included a provision for GAO to evaluate DOD's report for accuracy and analytical sufficiency. In this report, GAO evaluates the extent to which (1) DOD's report included the required elements, and (2) DOD's methodology and analysis result in accurate and analytically sufficient information on excess capacity. To conduct this work, GAO reviewed DOD's 2017 report and compared it with the statutory requirements and generally accepted research standards. GAO also interviewed DOD and military service officials.

What GAO Found

The Department of Defense's (DOD) 2017 infrastructure capacity report addressed four of five required elements from section 2815 of the National Defense Authorization Act (NDAA) for Fiscal Year 2016. Specifically, DOD's report addressed the elements requiring it to submit a force-structure plan, a categorical inventory of worldwide military installations, a discussion of categories of excess infrastructure, and an assessment of the value of retaining certain excess infrastructure. DOD's report partially addressed the element to include a description of the infrastructure capacity required to support the force structure. Specifically, DOD's report did not provide a complete picture of the infrastructure needed. For example, infrastructure at Air Force large aircraft installations was described by square yards of apron space but did not include other infrastructure needs, such as aircraft hangars and maintenance facilities.

DOD's excess capacity methodology and analysis have three key limitations that affect the accuracy and analytical sufficiency of the estimate. Specifically:

- DOD used a 1989 baseline for excess capacity that may lead to inaccurate results. This 1989 baseline does not reflect updates in DOD facility standards and requirements or requirements associated with new weapon systems.

- DOD's excess capacity methodology includes assumptions, such as not accounting for potential shortfalls—not having enough infrastructure to support the mission—that may not be reasonable. Specifically, when DOD's calculation identifies a shortfall in capacity, DOD concludes that no excess capacity exists. As a result, DOD's analysis identifies no excess capacity in nearly half (14 of 32) of its mission categories. However, most installations support more than one mission and have more infrastructure present than the installation category metric measures. Thus, including potential capacity shortfalls could provide DOD and Congress with a more accurate estimate of excess capacity upon which to base decisions concerning the management of base infrastructure and excess capacity. (A simplified illustration of this shortfall assumption appears after this summary.)

- DOD's method for estimating excess capacity is not always sufficient because the installation selection process does not result in a generalizable sample. Furthermore, DOD's method is not always implemented effectively because the military departments did not follow a consistent approach.
According to DOD officials, specific department-wide guidance concerning DOD's methods for selecting installations in its analysis does not exist. Moreover, without such guidance, the estimates of excess capacity may not be based on consistent methods across the department, resulting in inaccurate estimates. Furthermore, neither DOD nor Congress will have the necessary information to make decisions concerning the management of excess infrastructure capacity across the department.

What GAO Recommends

GAO is making three recommendations to DOD to update the baseline, use reasonable assumptions, and develop guidance to improve its methods for estimating excess capacity. In comments on a draft of this report, DOD concurred with one recommendation, partially concurred with two recommendations, and stated that it plans to incorporate them in any future capacity analysis.
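To make the shortfall assumption discussed above concrete, the following minimal sketch applies a simplified reading of the ratio-based approach described in this report to two hypothetical mission categories. The category names, baseline ratios, and current infrastructure and force figures are invented for illustration and are not DOD data; DOD's actual model is more detailed.

# Simplified sketch of a ratio-based excess capacity estimate.
# All numbers are hypothetical; this is not DOD's actual model.
# The metric is infrastructure per unit of force structure (e.g.,
# maneuver acres per battalion equivalent, or square yards of apron
# space per aircraft), compared against a 1989 baseline ratio.

categories = {
    # name: (baseline_ratio_1989, current_infrastructure, current_force)
    "maneuver": (1000.0, 2_400_000, 2_000),       # acres per battalion equiv.
    "large_aircraft": (5000.0, 900_000, 200),     # sq. yards apron per aircraft
}

def excess_percent(baseline_ratio, infrastructure, force, floor_shortfalls=True):
    required = baseline_ratio * force              # what the baseline ratio implies
    excess = (infrastructure - required) / infrastructure * 100.0
    if floor_shortfalls and excess < 0:
        return 0.0  # the shortfall assumption: report a shortfall as zero excess
    return excess

for name, (ratio, infra, force) in categories.items():
    reported = excess_percent(ratio, infra, force)
    unfloored = excess_percent(ratio, infra, force, floor_shortfalls=False)
    print(f"{name}: reported {reported:.1f}% excess, unfloored {unfloored:.1f}%")

In the second category, the calculation yields a shortfall (about -11 percent), which the floor reports as zero excess. Aggregated across categories, this is how an analysis can show no excess capacity in a category even when the underlying arithmetic shows an imbalance.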
Background

Following the terrorist attacks of September 11, 2001, Congress passed the Aviation and Transportation Security Act, which created TSA as the federal agency responsible for security in all modes of transportation, including civil aviation. Among its responsibilities, TSA must generally ensure that all passengers and property are screened before being transported on a commercial passenger aircraft. This statute also provided TSA the authority to enter into OTAs. TSA defines an OTA as a set of legally enforceable promises between TSA and another party that is other than a procurement contract, grant, cooperative agreement, lease, or loan.

Every agency has inherent authority to enter into contracts to procure goods or services for its own use; however, agencies must receive specific authority to award OTAs. Under these authorities, agencies may develop agreements that do not follow a standard format or include terms and conditions that are typically required when using traditional mechanisms such as contracts based on the Federal Acquisition Regulation (FAR). Agreements entered into using other transaction authority are generally not subject to certain statutory and regulatory requirements related to government contracting, such as the FAR, and the terms and conditions of each individual OTA may be tailored to the specific situation. For example, OTAs may be fixed-price or cost-reimbursable, or they may provide that each party bear the costs of its participation. In addition, the length of an OTA is negotiable, with some agreements lasting a few days and others lasting for years.

As we reported in 2016, Congress has granted other transaction authority to 11 federal agencies. The statutory authorities for most agencies, however, include some limitations on the use of the agreements, although the extent and type of limitations vary. We found that most of the 11 agencies used OTAs for two purposes: (1) research, development, and demonstration; and (2) prototype development. Three agencies—the Federal Aviation Administration, TSA, and the National Aeronautics and Space Administration—used OTAs for different activities, such as airport security and education and outreach. Only a few agencies, including TSA and the National Aeronautics and Space Administration, have unrestricted authority to award OTAs. We also found that 9 of the 11 agencies had fewer than 90 active OTAs per fiscal year, but that, in contrast, TSA and the National Aeronautics and Space Administration had hundreds, and thousands, respectively.

TSA's OTA Policy

TSA's Office of Contracting and Procurement established policy and procedures for the use, award, and oversight of OTAs in 2011. Prior to 2011, TSA had no governing policy for OTAs. According to TSA's policy, which has been revised several times since its inception, OTAs are best suited for situations where:

- an entity is not a traditional contracting partner, for example, airlines, airport authorities, trade associations, quasi-governmental entities, or research and development organizations;

- there are cost sharing mechanisms that require the recipient to contribute to the overall cost of the effort; or

- the recipient must recoup all costs through third-party user fees.

Further, the policy states that OTAs may not be used when the principal purpose of the agreement is to acquire (by purchase, lease, or barter) property or services for the direct benefit or use of the United States government. Table 2 identifies some of the key provisions of TSA's OTA policy.
This framework for awarding and overseeing OTAs is similar to that for contracts. Further, according to TSA's OTA policy, contracting officers who award OTAs must be certified at Federal Acquisition Certification in Contracting Level III and demonstrate possession of a level of experience, responsibility, business acumen, and judgment that enables them to operate in the relatively unstructured business environment of the OTA.

TSA Obligates Millions Annually through OTAs, Primarily to Reimburse for Costs Associated with TSA Security Programs

From fiscal years 2012 through 2016, TSA reported obligating millions annually through OTAs, which amounted to at least $1.4 billion, or about 13 percent of its overall obligations during this time. Five TSA reimbursement programs used OTAs to partially or fully reimburse airports and law enforcement agencies for the allowable costs associated with TSA security programs, such as the design and construction of checked baggage inline systems. These five reimbursement programs accounted for about 99 percent of the $1.1 billion that TSA obligated on OTAs that were awarded during this period. The remaining three non-reimbursement programs accounted for a small amount of obligations and awarded a low number of OTAs for services including intelligence analysis and the development of aviation standards.

TSA Obligates Millions Annually through OTAs

From fiscal year 2012 to 2016, TSA reported obligating millions annually through OTAs, amounting to at least $1.4 billion, or about 13 percent of its overall obligations through contracts and OTAs. Annual OTA obligations remained fairly stable over this period, except for fiscal year 2013, when obligations spiked and then sharply declined in fiscal year 2014. This spike was driven in large part by the Electronic Baggage Screening Program, which obligated $519 million on 54 OTAs in fiscal year 2013 but obligated only $4 million on one OTA in fiscal year 2014. See table 3 for TSA's obligations on contracts and OTAs.

TSA Primarily Uses OTAs to Reimburse Airports and Law Enforcement Agencies for the Costs Associated with Security Programs

From fiscal year 2012 to 2016, eight TSA programs used OTAs to meet a variety of mission requirements. Five reimbursement programs used OTAs to partially or fully reimburse airports and law enforcement agencies for the allowable costs associated with TSA security programs. This accounted for about 99 percent of all OTA awards and obligations from fiscal year 2012 to 2016. The remaining three non-reimbursement programs accounted for a small amount of obligations and awarded a low number of OTAs for services including intelligence analysis and the development of aviation standards. See table 4 for the number of OTA awards and obligations by program. For more information on the programs and OTAs we reviewed, see appendix I.

The five reimbursement programs awarded numerous OTAs to different airports and law enforcement agencies for similar requirements. These programs each used a class determination and findings that describes the general requirement and other parameters, such as a range of possible award amounts or periods of performance. TSA has an OTA template with standard provisions. Terms tailored to the specific airport or law enforcement agency are then provided in the individual OTAs. The following examples illustrate some of the ways TSA has used OTAs to reimburse airports and law enforcement agencies for the costs associated with TSA security programs.
The Electronic Baggage Screening Program is an acquisition program that tests, procures, deploys, and maintains checked baggage screening equipment at federalized airports. TSA uses FAR-based contracts to buy items such as explosives detection machines and engineering support services. TSA uses OTAs to reimburse airports for the allowable design and construction costs associated with facility modifications needed for installing, updating, or replacing in-line checked baggage screening systems. These systems use conveyor belts to route checked luggage through an explosives detection machine, which captures an image of each checked bag to determine whether it contains any threat items, including explosives. Agreements generally range in value from $50,000 to $150 million, and the anticipated period of performance can range from 6 months to 3 years, depending on the size and complexity of the project. In one example, TSA entered into an OTA to reimburse the City of Cleveland about $24 million for work at Cleveland Hopkins International Airport for installation of explosive detection systems within the checked baggage screening area.

The Law Enforcement Officer Reimbursement Program provides partial salary reimbursement to approximately 325 airports to offset the costs of carrying out aviation law enforcement responsibilities in support of passenger screening activities. Reimbursement is based on an established "not-to-exceed" hourly rate or the actual cost per hour, whichever is lower. Agreements range in value depending on the airport category, the number of checkpoints and law enforcement officers, hours of operation, and availability of funds. The period of performance for these agreements is generally 3 to 5 years. For example, TSA entered into an agreement with the Dallas/Fort Worth International Airport Board that lasted from October 2012 to March 2016 to reimburse the airport about $5.5 million.

While the five reimbursement programs awarded numerous OTAs for the same purpose to different airports and law enforcement agencies, the remaining three non-reimbursement programs awarded few OTAs, and their use was more varied. Specifically, the Office of Security Policy and Industry Engagement, the Office of Law Enforcement/Federal Air Marshal Service, and the Office of Global Strategies used OTAs for a range of services including intelligence analysis and the development of aviation standards. For example:

The Office of Security Policy and Industry Engagement is responsible for developing security policies to reduce the risk of catastrophic terrorist attacks. From fiscal year 2012 to 2016, the office awarded four OTAs. These included two awards to the American Public Transportation Association to meet ongoing requirements for intelligence gathering, public transit information sharing and analysis, and the development of mass transit and passenger rail security practices.

The Office of Law Enforcement/Federal Air Marshal Service awarded 13 OTAs to pay for parking for federal air marshals and authorized Law Enforcement Office employees at airports including John F. Kennedy International and Washington Dulles International. However, in September 2016, TSA competitively awarded a contract to manage parking expenses at numerous airports. According to officials, parking requirements for the Office of Law Enforcement/Federal Air Marshal Service will be met through the contract and, as a result, existing OTAs for this requirement are being phased out.
Other than the parking OTAs, TSA officials noted that the requirements for the seven remaining programs that used OTAs from fiscal year 2012 to 2016 are ongoing and that TSA will continue to use OTAs for the same purposes in fiscal year 2017 and beyond, contingent on available funding. They also noted that they do not anticipate any new uses of OTAs.

Methods to Price and Monitor Selected OTAs Reviewed Varied, and TSA Has Taken Action to Strengthen Oversight

Our review of 29 OTAs awarded by 8 TSA programs from fiscal years 2012 through 2016 found that the methods used to determine price reasonableness and monitor these OTAs varied based on the complexity of the requirement. Further, for the key areas we reviewed, the OTAs generally met the requirements of TSA's policy. Nonetheless, TSA's own 2015 internal compliance review found significant gaps in OTA documentation and reporting. In response to these deficiencies, TSA has taken action to strengthen oversight and compliance with its policy.

Methods to Determine Price Reasonableness and Monitor OTAs Varied by Program

TSA's OTA policy requires contracting officers to determine that the price negotiated under the OTA is reasonable and to appoint a COR to provide monitoring and a range of administration tasks to ensure that requirements are satisfactorily delivered. For the 29 OTAs we reviewed, we found that the methods used to determine price reasonableness and provide monitoring varied based on the complexity of the requirement.

Approaches to determining price reasonableness ranged from instances where TSA extensively evaluated proposed costs to more straightforward analysis. For OTAs awarded by the Electronic Baggage Screening Program, where the requirements for infrastructure design and construction can be complex, the program produces an independent government cost estimate based on design drawings and specifications from the airports, which are required to follow TSA's detailed guidance. The program compares the estimate with the airport authority's independent bid for the design and construction. Any discrepancies are noted in the technical evaluation, which the contracting officer reviews and documents in the business clearance memorandum. For example, in fiscal year 2016, TSA awarded an OTA for $23 million to the City of Chicago for the recapitalization of the checked baggage resolution area at O'Hare International Airport. Certain proposed costs in the contractor's bid were higher than TSA's independent government cost estimate. The contracting officer performed an evaluation of the costs and determined that they were reasonable and that the difference was, in part, the result of the airport having greater familiarity with the existing conditions at the site than TSA's cost estimators.

By contrast, some programs took a more straightforward approach to determining price reasonableness, including cases where the costs were predetermined or not negotiable. For example, the Checkpoint Janitorial and Utilities Program used OTAs as a vehicle for reimbursing airport authorities for the costs of electricity to operate TSA screening equipment and for janitorial services in checkpoint areas. TSA had independently verified electricity prices set by the local power authority. Prices for janitorial services were verified based on the airports' competitively awarded janitorial contracts. In one case, TSA entered into an OTA to reimburse the Massachusetts Port Authority $678,000 for one year.
TSA performed price analysis on historical data from agreements dating back to 2008 and reviewed changes to the checkpoint square footage and changes in electrical consumption based on the use of new TSA equipment. The airport authority provided documentation verifying electrical rates set by the local power authority, which TSA's contracting officer used to determine fair and reasonable pricing. Janitorial costs were based on TSA's pro-rated share of the airport's competitively awarded janitorial contract and were considered fair and reasonable based on adequate competition in the commercial marketplace. TSA verified the rates each year prior to exercising options.

COR monitoring similarly varied depending on the complexity of the requirement. For the more complex design and construction projects under the Electronic Baggage Screening Program, COR monitoring was more rigorous than for programs with less complex requirements. According to 2016 guidance, the COR is the primary interface between TSA and the airport and is responsible for performing stakeholder coordination functions. During the design phase, the COR is to review the airport's design documentation to ensure compliance with TSA's guidelines and standards, in collaboration with TSA subject matter experts. During the construction phase, the COR is responsible for performing ongoing oversight, including reviewing invoices prior to payment. For an OTA awarded to the Miami Dade Aviation Department, the COR reviews monthly milestone progress status reports as well as weekly status reports prepared by TSA's site integration contractor highlighting work completed, ongoing activities, and program risks. A contracting official noted that schedule slippage is a significant risk for cost reimbursement projects, one that is mitigated by COR oversight as well as the ongoing oversight of the site leads. A contracting official also noted that most CORs for these OTAs have Department of Homeland Security (DHS) certification for program and project management, providing them with greater technical and administrative expertise to monitor more complex projects.

In one instance, on another project with complex requirements under the Advanced Surveillance Program, project monitoring resulted in TSA and the airport working together to contain costs when a project did not go as expected. In fiscal year 2012, TSA awarded an OTA for $7.2 million to the Port Authority of New York and New Jersey for the design, installation, and maintenance of a security system, including closed-circuit television cameras and associated software, at John F. Kennedy International Airport. In fiscal year 2013, TSA modified the OTA to add more cameras, thereby increasing the cost of the project to $21 million. However, during installation, the Port Authority experienced several unforeseen issues with the project, including reduced work hours available for unionized labor and asbestos abatement costs. As a result, the Port Authority reassessed its original cost estimate and determined that it was not sustainable. In fiscal year 2017, TSA and the Port Authority agreed to decrease the scope of the project from 751 cameras to 389 cameras to stay within the original $21 million estimate.

TSA Found Improved Compliance in Its Reviews of OTAs after Taking Action to Address Lapses in Oversight

Starting in fiscal year 2015, four years after it issued its 2011 OTA policy, TSA began to include OTAs in its contract compliance review program.
Compliance reviews are conducted quarterly based on a selection of contracts and OTAs awarded in the previous quarter and are intended to improve contracting operations, ensure compliance with applicable standards and policies, and identify best practices. Based on the number of findings identified in its review of six OTA actions included in a 2015 quarterly review, TSA commissioned an OTA-specific compliance review in June 2015. The OTA-specific review covered 30 actions with a total value of about $82 million and identified significant gaps in documentation and reporting. For example, 18 of 27 OTAs awarded after TSA's 2011 policy was issued did not include a determination and findings approving the action. As noted above, this is a key document that describes the rationale for using an OTA instead of a traditional contract and the determination of price reasonableness. The review also found that 18 of 30 files did not document the assignment of a COR to perform oversight and that 20 of 30 records in the Federal Procurement Data System-Next Generation (FPDS-NG) were incorrect.

In response to the findings of the OTA-specific compliance review, TSA implemented a number of actions and has subsequently found improvement in OTAs meeting documentation and reporting requirements. We found that TSA revised the OTA policy to clarify requirements and increased training for contracting officers with OTA warrants. Specifically, to obtain the OTA warrant, contracting officers must complete webinar training and 3 days of classroom training. To maintain the warrant, contracting officers must retake the webinar training every two years. According to TSA contracting officials, all 56 contracting officers had completed the new training requirements as of May 2017.

In addition, TSA has continued to include OTAs in its quarterly compliance review process. Based on our analysis of TSA's fiscal year 2016 compliance reviews, we found that TSA reviewed 16 OTAs with a total value of $62 million. In those reviews, 12 of the 16 OTAs had findings that were determined to be low risk. For example, several of the files did not include documentation of COR certification. The remaining four OTAs had findings that were determined to be medium risk. This includes, for example, one case where the OTA period of performance started 5 months before the OTA was signed. None of the OTAs, however, was missing a determination and findings, and three had missing or incorrect FPDS-NG entries. Officials noted that their efforts to increase training, oversight, and enforcement of OTA policies and procedures have resulted in increased awareness of reporting requirements and greater compliance.

TSA also recently increased oversight of the COR program to support efficient OTA and contract oversight and administration. A TSA official responsible for the COR program reported that in fiscal year 2017, TSA began to conduct quarterly compliance reviews of the COR program to ensure greater consistency in oversight practices across the agency. According to COR compliance review guidance issued in 2016, the reviews are intended to highlight positive practices and effective management techniques and to identify areas for improvement.

Our analysis of data in FPDS-NG showed that issues with incomplete data have been corrected over time, in part due to increased oversight. We compared data reported in TSA's financial management and accounting systems with data reported in FPDS-NG and found that the percentage of new OTAs reported in FPDS-NG increased from 37 percent in 2012 to 95 percent in 2016.
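At its core, a comparison of that kind is a set reconciliation between two record systems. The minimal sketch below illustrates the idea; the award identifiers are hypothetical, not TSA records.

# Sketch of reconciling OTA awards recorded in an accounting system
# against entries in FPDS-NG. All identifiers are hypothetical.

accounting_awards = {"OTA-0001", "OTA-0002", "OTA-0003", "OTA-0004"}
fpds_ng_entries = {"OTA-0001", "OTA-0003", "OTA-0004"}

reported = accounting_awards & fpds_ng_entries   # awards found in both systems
missing = accounting_awards - fpds_ng_entries    # awards never entered in FPDS-NG

rate = len(reported) / len(accounting_awards) * 100
print(f"Reported in FPDS-NG: {rate:.0f}%")       # prints: Reported in FPDS-NG: 75%
print(f"Missing entries: {sorted(missing)}")     # prints: Missing entries: ['OTA-0002']

Because the set difference identifies the specific unreported awards, a monthly review of this kind can target corrections rather than re-checking every record.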
TSA's policy requires that OTAs be reported in the OTA module within FPDS-NG. The awarding contracting officer has responsibility for accurately entering OTA information, including the value of the award and the period of performance. TSA contracting officials attributed gaps in data in part to the fact that the process for entering OTA data into FPDS-NG is manual, whereas FPDS-NG automatically pulls data for contracts from TSA's contract writing system. According to officials, OTAs are excluded from the contract writing system due to system limitations, and this additional step increases the chance that a contracting officer may forget to enter the data into FPDS-NG or may enter it into the system incorrectly. TSA officials noted that they have taken steps to improve the accuracy of the data reported in FPDS-NG by reviewing and verifying entries on a monthly basis in accordance with TSA's policy.

Our review of 29 OTAs also demonstrated that the OTAs generally met the requirements for the key areas of TSA policy that we reviewed. For example, TSA's policy states that if the OTA will be awarded without competition, the determination and findings must include a discussion of the method for selecting the OTA recipient. None of the OTAs we reviewed was competed because TSA determined that competition was not applicable due to the nature of the requirements. Nonetheless, all the determination and findings included a discussion of the method for selecting OTA recipients, a process that varied by program. For example, the Law Enforcement Officer Reimbursement Program posts a solicitation and selects eligible applicants based on review criteria. By contrast, the Advanced Surveillance Program prioritizes projects using a risk-based matrix that assesses threats, vulnerabilities, and consequences, populated with data from 449 airports.

Despite improvements, TSA officials acknowledged the need for continued vigilance based on several issues we identified. For example, TSA entered into a "no funding" OTA in 2013 with Signature Flight Support, a commercial fixed-base operator at Ronald Reagan Washington National Airport. A fixed-base operator is an organization granted the right by an airport to provide aeronautical services such as fueling, hangaring, tie-down and parking, aircraft rental, aircraft maintenance, flight instruction, and similar services. Under the agreement, Signature Flight Support collects and remits special security screening and threat assessment fees from airline operators on behalf of TSA, fees that are required due to the airport's location within a flight restricted zone and special flight rules area. TSA does not obligate funds through the OTA, which primarily establishes the responsibilities and procedures for the fee collection and remittal. Our review found that TSA did not take any action to extend or renew the agreement after it expired in December 2014. However, TSA program officials told us that Signature Flight Support continued to provide the service although an agreement was not in place. When we brought this issue to TSA's attention, officials agreed that the OTA period of performance should have been extended each year. Officials told us that, as of October 2017, they anticipated awarding a new OTA for this requirement in the second quarter of fiscal year 2018, more than three years after the OTA expired.
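A lapse of this kind is the sort of condition a routine administrative check could surface. The sketch below is a hypothetical illustration, not a TSA tool: it scans agreement records for periods of performance that have ended while the underlying requirement remains active.

# Sketch of a periodic check flagging OTAs whose period of performance
# has lapsed while the requirement is still active. Data are hypothetical.
from datetime import date

otas = [
    {"id": "OTA-A", "end": date(2014, 12, 31), "requirement_active": True},
    {"id": "OTA-B", "end": date(2019, 9, 30), "requirement_active": True},
]

as_of = date(2017, 10, 1)
for ota in otas:
    if ota["requirement_active"] and ota["end"] < as_of:
        # An expired agreement with an ongoing requirement needs action:
        # extension, renewal, or a new award.
        print(f"{ota['id']}: expired {ota['end']}, requirement still active")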
In addition to the steps TSA has taken to improve OTA oversight, such as revising its OTA policy and increasing training requirements, TSA officials told us that they will continue to conduct quarterly compliance reviews and monthly data verification in accordance with their policy.

Agency Comments

We provided a draft of this report to the Department of Homeland Security for comment. The department provided only technical comments, which we incorporated as appropriate.

We are sending copies of this report to the Senate Committee on Homeland Security and Governmental Affairs and the Secretary of the Department of Homeland Security. The report is also available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4841 or woodsw@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II.

Appendix I: Summary of Key Areas for the Other Transaction Agreements GAO Reviewed

Electronic Baggage Screening Program

Purpose: Reimburses airports for the allowable costs related to various airport checked baggage screening projects, including the design and construction of checked baggage inline systems and the recapitalization of existing inline systems. Agreements generally range in value from $50,000 to $150 million, and the anticipated period of performance can range from 6 months to 3 years, depending on the size of the airport and the complexity of the project.

TSA rationale for using Other Transaction Agreement (OTA): Airports are owned and operated either by city or county municipalities, airport boards or trusts, or, in some cases, not-for-profit entities. Given that the program requires modifications to airport terminals that are owned by an entity other than the federal government, it is more practical for the airport to oversee and monitor the construction or modifications required for its facilities.

Method of selecting OTA recipient: Airports submit applications through the airport's Federal Security Director—a TSA employee responsible for security operations at federalized airports—including a description of the requirement, schematic design, budgetary cost estimate, and data relating to the number of bags processed and airlines served. TSA prioritizes applications using a risk-based model and by considering several factors, such as the cost share the airport is willing to assume and the readiness of the airport to begin the project.

OTA type: Partial cost share/reimbursement. Depending on the airport's size, TSA will reimburse 90 or 95 percent of the allowable, allocable, and reasonable cost of certain projects. In other types of projects, TSA provides 100 percent reimbursement—for example, for existing systems requiring the correction of security or safety deficiencies.

Method of determining price reasonableness: TSA produces an independent government cost estimate based on design drawings and specifications received from the airport and approved by TSA. The estimate is developed using industry standards and is used for evaluating total project cost. When bids are received from the airport, TSA compares the bid amount with the estimate. TSA may conduct further analysis and discussion to ensure that the estimate correctly reflects the scope included in the bid documents.
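The comparison step can be illustrated with a short sketch that checks a bid against the independent government cost estimate (IGCE) element by element. The cost elements, dollar figures, and the 10 percent review threshold below are assumptions for illustration only; TSA's guidance does not prescribe these values.

# Sketch of an element-by-element comparison of an airport bid against
# an IGCE. Figures and the variance threshold are hypothetical.

igce = {"design": 2_000_000, "construction": 18_000_000, "testing": 1_000_000}
bid = {"design": 2_100_000, "construction": 20_500_000, "testing": 950_000}

THRESHOLD = 0.10  # flag elements that differ from the IGCE by more than 10%

for element, estimate in igce.items():
    variance = (bid[element] - estimate) / estimate
    flag = "REVIEW" if abs(variance) > THRESHOLD else "ok"
    print(f"{element:12s} {variance:+.1%}  {flag}")
    # construction comes out at +13.9% and is flagged for review

An element flagged this way would prompt the further analysis and discussion described above, not an automatic rejection of the bid.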
COR monitoring: The Contracting Officer's Representative (COR) is the primary interface between TSA and the airport and is responsible for performing stakeholder coordination functions. During the design phase, the COR is to review the airport's design documentation to ensure compliance with TSA's guidelines and standards, in collaboration with TSA subject matter experts. During the construction phase, the COR is to monitor project schedule and scope through processes such as weekly and monthly reporting.

Law Enforcement Officer Reimbursement Program

Purpose: Provides partial reimbursement to approximately 325 airports to offset the allowable costs of carrying out aviation law enforcement responsibilities in support of passenger screening activities.

TSA rationale for using OTA: Participants are not traditional contracting partners; most participants must contribute to the cost of providing law enforcement officer support at the checkpoints; and the agreements do not acquire property or services for the direct benefit or use of the government.

Method of selecting OTA recipient: The program posts a solicitation to FedBizOpps.gov with eligibility requirements, the application process, review criteria, and the selection process. Airports as well as state, local, or other public institutions/organizations responsible for commercial airport operations that have incurred law enforcement service costs due to TSA security mandates are eligible. The Federal Security Director—a TSA employee responsible for security operations at federalized airports—along with the Law Enforcement Officer Program Office, the Office of Chief Counsel, and the contracting officer, participates in selecting eligible applicants.

OTA type: Partial cost reimbursement.

Method of determining price reasonableness: OTAs are negotiated to provide reimbursement for law enforcement officer support at an established "not-to-exceed" hourly rate or the actual cost per hour, whichever is lower. The amount of partial reimbursement is based on airport category, the number of checkpoints, hours of operation, and availability of funds.

COR monitoring: CORs provide technical direction and day-to-day oversight of the program, work with the airport Federal Security Director to make sure that requirements are being satisfied, and approve invoices prior to payment.

Advanced Surveillance Program

Purpose: Provides reimbursement for the allowable costs incurred to design, install, or expand surveillance systems to meet the required views of the local TSA. Project costs generally range from $200,000 to $21 million, with an anticipated period of performance ranging from 6 months to 3 years depending on the complexity of the system and the size of the facility.

TSA rationale for using OTA: The primary beneficiary of the surveillance equipment is the facility, which will take ownership of the system and be solely responsible for its operation. The use of an OTA provides for the facility to manage and perform the work but allows TSA oversight and control over the expenditure of TSA funds. TSA will not benefit directly from the purchase, installation, and operation of the system, so a traditional contract would not be appropriate.

Method of selecting OTA recipient: The program prioritizes projects based on a risk-based matrix that assesses threats, vulnerabilities, and consequences based on data from 449 airports. Airports must be willing to complete the project within the required timeframe.

OTA type: Cost reimbursement.
Method of determining price reasonableness: The program uses a pre-award systems engineering process that culminates in a project evaluation and plan, a comprehensive surveillance assessment of TSA-managed areas, and an independent government cost estimate. TSA reviews the cost elements to, for example, validate labor categories, labor hours, materials, and other direct costs based on industry standards and comparison with other projects. The program also uses market research and historical data to inform price analysis.

COR monitoring: The COR works with project coordinators to monitor OTA performance and maintains direct contact with the transportation facility and the local TSA representatives. The COR reviews invoices to ensure that the transportation facility (via its contractor) has met all acceptance criteria prior to approval and payment of each invoice. Upon completion of installation and testing, TSA obtains an acceptance report signed by the transportation facility authority and major stakeholders, including facility representatives and the responsible TSA Federal Security Director, contracting officer, and COR.

National Explosives Detection Canine Team Program

Purpose: Provides partial reimbursement to airports, mass transit systems, and state and local law enforcement participants for the allowable costs incurred in connection with the operation of the authorized canine teams and explosives storage magazines. Allowable costs that will be reimbursed include handlers' salaries and care for the canines. In turn, the local jurisdiction agrees to a set of responsibilities, including using TSA-trained canine teams at least 80 percent of their on-duty time in the transportation environment and maintaining a minimum of three certified teams available for around-the-clock incident response. The program reimburses participants up to $50,500 per canine team for allowable costs incurred. The period of performance for these OTAs is up to 5 years.

TSA rationale for using OTA: A standard procurement contract is not suitable because the airports and mass transit and maritime facilities are not owned by TSA but by airport authorities and state and local agencies. These entities have the responsibility for the control and oversight of security operations at a specific location, either by having their own law enforcement officers or by using state or local law enforcement officers. Since TSA does not own the airport or have primary law enforcement responsibility and only provides participants partial reimbursement for the operating costs of the teams, an OTA is warranted.

Method of selecting OTA recipient: Transportation authorities and/or local law enforcement entities submit a written request to join the program that outlines the need for canine teams within their respective transportation systems. TSA selects recipients based on a review of the transportation system's risk profile and the program's available team openings.

OTA type: Partial cost reimbursement.

Method of determining price reasonableness: The $50,500 per-team stipend covers only a portion of the cost to the participant. There are instances after award that require an additional price reasonableness determination, such as when a participant requests reimbursement for a supply or service that is either unknown to the program or inconsistent with the program's historical prices for the given supply or service.
If the program determines that the item is allocable, the program will determine whether it was procured competitively and whether any facts support its price being higher than historical prices paid. If the item was not procured competitively, the program will look at current price lists and catalogs for the same or a similar item and consult program subject matter experts on their personal knowledge of the item(s) being purchased.

COR monitoring: The program assigns a Field Canine Coordinator who is responsible for overseeing the participant’s compliance with the agreement through periodic reporting and assessments. Reimbursement is to be made upon receipt and review of submitted expenses by the COR and contracting officer.

Purpose: The Checkpoint Janitorial and Utilities program uses OTAs to define the terms and conditions for TSA’s use of checkpoint space in mandated non-leased space at airports and to provide a vehicle for reimbursing the cost of electrical consumption and janitorial services.

TSA rationale for using OTA: A procurement contract is not suitable since the airport is a governmental entity, not a commercial vendor. Additionally, airports often contract directly with a utility provider or janitorial company.

Method of selecting OTA recipient: Airports request reimbursement for utility costs and janitorial services in mandated non-leased space at TSA security checkpoints. TSA Federal Security Directors, who are responsible for security operations at federalized airports, confirm the need for reimbursing the cost of utilities and janitorial services at the checkpoint space. These OTAs are not available for competition, as the only available source is the airport authority.

OTA type: Cost reimbursement.

Method of determining price reasonableness: TSA reimburses airports for the costs of electrical consumption by TSA screening equipment located in the checkpoint space, based on a cost allocation methodology. TSA also reimburses airports for its pro-rata share of the airports’ janitorial costs per square foot, based on a cost allocation methodology. In the files we reviewed, electricity prices were considered to be fair and reasonable based on documentation verifying the rates set by the local power authority, and janitorial costs were considered to be fair and reasonable based on the airports’ competitively awarded janitorial contracts and rates established by the local utility authority.

COR monitoring: Provides technical direction, contractor oversight, and certification of payments.

Purpose: The office has an ongoing requirement for intelligence gathering, public transit information sharing and analysis, and development of mass transit and passenger rail recommended security practices.

TSA rationale for using OTA: The American Public Transportation Association is a not-for-profit trade association and therefore may not currently have the experience, knowledge, or past performance to support a FAR-type contract.

Method of selecting OTA recipient: Through market research, TSA determined that the American Public Transportation Association was uniquely capable of meeting requirements.

OTA type: Fixed price.

Method of determining price reasonableness: In 2014, price was determined to be fair and reasonable based primarily on historical data and prices consistent with the preceding interagency agreement and the office’s independent government cost estimate.
In 2016, the program updated the independent government cost estimate based on a quote from the American Public Transportation Association, which provided greater clarity, insight, and definition to the actual costs. Additional market research is planned to determine the best way to fulfill this requirement in the future.

COR monitoring: The COR developed a contract management plan which identifies a detailed list of work products and a delivery schedule. The expected deliverables are also detailed in the OTA statement of work. Responsibilities of the contractor include developing and managing a project plan; updating the plan as the project evolves; reporting project progress and status via monthly reports; and participating in TSA-scheduled conference calls, if necessary, to review project progress, identify and discuss issues, and discuss corrective action.

Purpose: The Surface Division of the Office of Security Policy and Industry Engagement has a need to maintain railroad police personnel involvement and a liaison relationship with the FBI’s National Joint Terrorism Task Force. The requirement entails the direct employment of intelligence gathering focused on preventing terrorist acts affecting the nation’s passenger and freight-rail infrastructure to facilitate the continuity of communications, liaison, intelligence analysis, and information sharing among federal, state, local, and railroad industry police/security agencies.

TSA rationale for using OTA: A procurement contract is not suitable for this requirement, as the purpose of the action is not to acquire property or services for the direct benefit or use of the United States government. Rather, the requirement entails the direct employment of intelligence gathering focused on preventing terrorist acts affecting the nation’s passenger and freight-rail infrastructure.

Method of selecting OTA recipient: Since 2003, the Association of American Railroads has provided TSA with a railroad police officer charged with collecting and analyzing intelligence information. Market research reveals the Association of American Railroads to be one of two major railway representation groups in the U.S., counting among its membership the seven largest freight and passenger rail carriers in North America. A follow-on agreement with the Association of American Railroads maintains an uninterrupted flow of the critical intelligence necessary in monitoring the safety and security of the nation’s railway infrastructure.

OTA type: Fixed price.

Method of determining price reasonableness: The program developed an independent government cost estimate based on prices paid under a previous agreement, which allows for an inflationary cost adjustment of 3 percent per year, and determined the annual funding cost to be fair and reasonable in meeting this requirement.

COR monitoring: The COR is responsible for the technical administration and liaison of the agreement and is to review and certify invoices for completeness and accuracy before approving them for payment. As authorized by the FBI, the assigned railroad police officer is to provide a monthly written report that summarizes the activities and accomplishments related to the tasks outlined in the agreement.

Purpose: Ronald Reagan Washington National Airport is located within the Flight Restricted Zone and Special Flight Rules Area.
As such, the Office of Security Policy and Industry Engagement developed a security program for approved general aviation aircraft operators which requires stringent security measures, including requirements for background checks and physical screening of passengers and baggage. Aircraft operators are responsible for reimbursing TSA for the cost of the security screening. TSA requires the use of the airport facility to perform the screening function and a mechanism for the collection of security screening and threat assessment fees from aircraft operators and remittance of those fees to TSA.

TSA rationale for using OTA: A procurement contract is not suitable for this requirement because TSA is not acquiring, purchasing, or leasing any product or service. The OTA primarily establishes the responsibilities of the parties and the fee collection and remittal procedures.

Method of selecting OTA recipient: TSA determined that Signature Flight Support—the sole commercial fixed base operator granted the right to operate at Reagan National Airport to provide aeronautical services such as fueling, hangaring, parking, aircraft rental, aircraft maintenance, flight instruction, and similar services—is the only entity capable of providing the facilities and services required to implement this program.

OTA type: No funding.

Method of determining price reasonableness: Not applicable.

COR monitoring: The COR is responsible for providing technical direction and administration.

Purpose: The Office of Global Strategies is directed to encourage the development of civil aviation security and is authorized to furnish to international organizations certain technical expertise and assistance. The office awarded an OTA to the International Civil Aviation Organization—a specialized agency of the United Nations committed to preventing and deterring unlawful interference with international civil aviation—to cover the salaries and benefits for three TSA employees assigned to the organization as senior security advisors. TSA actively participates in the organization’s Aviation Security Panel of Experts, which is responsible for promulgating international security standards.

TSA rationale for using OTA: An OTA is best suited for this requirement since the International Civil Aviation Organization is a United Nations specialized agency and TSA is not acquiring any property or services for the direct benefit or use of the United States government.

Method of selecting OTA recipient: There are no known alternative sources.

OTA type: Fixed price.

Method of determining a fair and reasonable price: Both the Program Office and the Contracting Officer relied solely upon historical salaries as previously used with the International Civil Aviation Organization.

COR monitoring: The COR reviews and the contracting officer approves all invoices prior to payment.

Purpose: TSA has a requirement to obtain parking spaces/permits for Federal Air Marshals during their mission flights for various airports.

TSA rationale for using OTA: A procurement contract is not suitable for this requirement, as airport parking is not considered a commercial item/service to the public; it is only available to business partners. An OTA allows TSA to participate in an airport’s business partner category. Further, OTAs provide a practical vehicle because the airport authority is considered a U.S. state government entity.
Method of selecting OTA recipient: TSA conducted market research which found that an OTA with the airport provides a significant cost savings to the government compared with other alternatives. TSA compared the costs of parking as a business partner with the cost of parking at the typical rates at the airport.

OTA type: Fixed price.

Method of determining a fair and reasonable price: TSA prepared an independent government cost estimate based upon commercial market pricing for airport parking.

COR monitoring: TSA will pay the airport at the established fixed rate on a monthly basis. All costs will be invoiced based on actual costs incurred, but not to exceed the OTA amount. To receive payment from TSA, the airport submits a one-page invoice that includes the quantity used, unit price, and extended prices of the monthly deliverable. The invoice will be reviewed and approved by the COR and contracting officer prior to payment.

Purpose: TSA has a need for parking for authorized Office of Law Enforcement employees at Washington Dulles International Airport.

TSA rationale for using OTA: The need for parking can be met more economically with a mechanism to directly reimburse the Metropolitan Washington Airports Authority.

Method of selecting OTA recipient: TSA conducted market research which found that an OTA with the Metropolitan Washington Airports Authority provides a significant cost savings to the government compared with other alternatives.

OTA type: Fixed price.

Method of determining a fair and reasonable price: TSA conducted price analysis and found that other available lots are all more expensive, farther away from the airport, and lack the capacity to service 400 people.

COR monitoring: The COR performs surveillance to assure performance and compliance with the terms and conditions of the agreement and certifies invoices to the contracting officer for payment.

Appendix II: GAO Contact and Staff Acknowledgments

GAO Contact:

Staff Acknowledgments

In addition to the contact named above, Tatiana Winger (Assistant Director), Angie Nichols-Friedman (Analyst in Charge), Peter Anderson, Lorraine Ettaro, Julia Kennon, Carol Petersen, Lindsay Taylor, Westley Tsou, Alyssa Weir, and Robin Wilson made key contributions to this report.
Why GAO Did This Study

TSA is responsible for securing the nation's transportation systems and uses security technologies to screen airline passengers and their luggage to prevent prohibited items from being carried on commercial aircraft. TSA has special authority for using OTAs, which are not subject to certain federal contract laws and requirements. OTAs provide flexibility to help meet mission needs, but potentially carry the risk of reduced accountability and transparency. GAO was asked to examine TSA's use of OTAs. This report addresses: (1) the extent and purposes of TSA's use of OTAs, and (2) how TSA ensures prices are reasonable and how it oversees OTAs. To address TSA's use of OTAs, GAO analyzed data on OTA awards and obligations from the Federal Procurement Data System-Next Generation from fiscal years 2012 to 2016 (the most recent years for which data were available). GAO determined that the data were sufficiently reliable to report on TSA's minimum use of OTAs. To examine how TSA prices and oversees OTAs, GAO selected a nongeneralizable sample of 29 OTAs from the 8 TSA programs that awarded them, based on program size and OTA value. GAO reviewed relevant documentation and interviewed contracting and program officials.

What GAO Found

During fiscal years 2012 through 2016, the Transportation Security Administration (TSA) awarded at least 1,039 other transaction agreements (OTA) and obligated at least $1.4 billion on them. These agreements, which are neither traditional contracts nor grants, were primarily used to reimburse airports and law enforcement agencies for the costs associated with TSA security programs. For example, TSA awarded at least 109 OTAs and obligated at least $783 million from fiscal years 2012 through 2016 to reimburse airports for the allowable design and construction costs associated with installing, updating, or replacing checked baggage screening systems. TSA also used OTAs for intelligence analysis and to offset the costs of providing canines for explosives detection, among other things.

TSA Used Other Transaction Agreements to Reimburse Airports for Design and Construction Costs Associated with Checked Baggage Screening Systems

For the selected 29 OTAs GAO reviewed, GAO found that the methods TSA used to determine price reasonableness varied depending on the complexity of the requirement. For example: for complex design and construction projects, TSA compared independent government cost estimates with contractor bids, and certified program managers monitored project schedule and scope through site visits and status reports. In contrast, TSA independently verified the rates set by the local power authority when reimbursing some airports for electricity costs to operate TSA screening equipment.

GAO also found that TSA has taken action to address prior lapses in oversight, resulting in improved compliance. In 2015, TSA identified significant gaps in OTA file documentation and data reported in the Federal Procurement Data System-Next Generation. TSA took action to address these deficiencies by (1) updating its policy, (2) requiring additional training for contracting officers, (3) instituting monthly data verification, and (4) monitoring compliance through quarterly reviews. GAO's analysis confirmed that the quality of the data had improved between fiscal years 2012 and 2016. Moreover, the 29 OTAs generally met key requirements of TSA's policy that GAO identified.

What GAO Recommends

GAO is not making any recommendations in this report.
Background

TSA Processes for Allocating TSOs across Airports

At TSA headquarters, the Office of Security Operations (OSO) has primary responsibility for operation of the RAP and allocation of TSOs across airports. Within OSO, the Staffing and Scheduling Division oversees the RAP. To allocate staff to the nearly 440 TSA-regulated airports in the United States, OSO is to use a combination of computer-based modeling and line-item adjustments based on airport-specific information. First, the agency is to work with a contractor to evaluate the assumptions—such as rates of expedited screening—used by the computer-based staffing allocation model (model) to determine the optimal number of TSOs at each airport based on airport size and configuration, flight schedules, and the time it takes to perform checkpoint and baggage screening tasks. Second, after the model has determined how many TSOs are required for each airport, headquarters-level staff are to make line-item adjustments to account for factors such as differences in staff availability and training needs that affect each airport. Figure 1 below provides additional details regarding TSA’s process to determine the number of TSOs at airports.

TSA’s Process for Evaluating Information Used in the RAP

As previously discussed, in 2007, we recommended that TSA establish a mechanism to periodically assess the assumptions in the RAP (prior to fiscal year 2017, known as the Staffing Allocation Model) to ensure that staffing allocations accurately reflect operating conditions that may change over time. TSA implemented this recommendation by developing an evaluation plan for regularly assessing the assumptions used in the staffing model. Assumptions include the number of passengers or bags that can be screened each hour by TSA equipment and the time TSOs require to operate discrete sections of the screening process, such as conducting pat-downs or searches of passengers’ carry-on baggage. The evaluation plan states that TSA is to assess (1) the time it takes to screen passengers using TSA equipment and (2) the number of staff needed to operate the equipment. Results from these assessments are to inform the assumptions used in the model to determine the base allocation of TSOs to U.S. airports. TSA uses the evaluation plan as well as airport-level characteristics to systematically evaluate the assumptions used in the model on a regular basis:

Evaluation plan: TSA’s evaluation plan recommends evaluating the time it takes to perform 19 aspects of passenger and checked baggage screening processes at least every two years and includes detailed procedures for doing so. For instance, the evaluation of passenger screening processes involves observing operations at selected airports to determine the average time it takes for one passenger to remove items of clothing and prepare his or her belongings for screening. Similarly, the evaluation determines how many passengers can be processed each hour during selected aspects of screening, such as by travel document checkers or via advanced imaging technology (AIT), often referred to as body scanners.

Individual airport characteristics: Each year, TSA airport-level staff, such as FSDs or their designees, are to review the information in the model to ensure that information on the number of checkpoints, each checkpoint’s configuration, and the number of flights departing the airport each day is accurate.
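The two-step allocation process described above (a model-generated baseline followed by headquarters line-item adjustments) can be illustrated with a minimal sketch. The function names, adjustment categories, and all figures below are hypothetical placeholders, not TSA's actual model logic or data.

```python
# Minimal sketch of the two-step TSO allocation described above:
# (1) a model baseline from airport characteristics and screening-rate
# assumptions, then (2) headquarters line-item adjustments.
# All names and numbers are hypothetical illustrations.

def model_allocation(flights_per_day, passengers_per_flight, passengers_per_tso_hour):
    """Step 1: baseline officer count implied by expected daily throughput."""
    daily_passengers = flights_per_day * passengers_per_flight
    tso_hours_needed = daily_passengers / passengers_per_tso_hour
    return tso_hours_needed / 8  # convert daily TSO-hours to 8-hour positions

def apply_line_items(base_tsos, adjustments):
    """Step 2: add airport-specific line items (training, leave, exceptions)."""
    return base_tsos + sum(adjustments.values())

base = model_allocation(flights_per_day=300, passengers_per_flight=120,
                        passengers_per_tso_hour=90)
final = apply_line_items(base, {
    "training_requirements": 6.0,   # e.g., travel to off-site training
    "annual_and_sick_leave": 4.5,   # based on the airport's leave history
    "granted_exceptions": 2.0,      # e.g., problematic checkpoint layout
})
print(f"model baseline: {base:.1f} TSOs; after line-item adjustments: {final:.1f} TSOs")
```

In practice the model also accounts for airport size, checkpoint configuration, and flight schedules, and the adjustments are reviewed annually, but the baseline-plus-line-items structure is the essential design.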
TSA Processes for Conducting Passenger and Checked Baggage Screening, and Collecting Wait Time Data at Airports

At the airport level, FSDs and their designees are responsible for overseeing TSA security activities, including passenger and checked baggage screening. TSOs at airports follow standard operating procedures that guide screening processes and utilize technology such as AITs or walk through metal detectors (WTMD) to screen passengers and their accessible property. TSOs also inspect checked baggage to deter, detect, and prevent the carriage of any unauthorized explosive, incendiary, or weapon onboard an aircraft. Checked baggage screening is conducted in accordance with standard operating procedures and generally is accomplished through the use of explosives detection systems or explosives trace detection systems. TSA employs an expedited screening program, known as TSA Pre®, that assesses passenger risk to aviation security prior to passengers’ arrival at an airport checkpoint. According to TSA, expedited screening involves a relatively more efficient and convenient screening process for individuals from whom TSA has obtained sufficient information to determine them to be of lower risk, compared to the standard screening process for travelers about whom TSA does not have such information in advance. Finally, at each airport, TSA is to collect throughput data on the number of passengers screened under both expedited and standard screening and monitor passenger wait times at screening checkpoints. TSA airport officials are to submit passenger throughput and wait time data on a daily basis to OSO’s Performance Management Division at TSA headquarters, which compiles the data through the Performance Measurement Information System (PMIS), TSA’s web-based data collection system.

TSA Offices Responsible for Sharing Information with Stakeholders about Airport Operations

TSA’s OSO and the Office of Security Policy and Industry Engagement (OSPIE) are both responsible for sharing information with stakeholders about airport operations. In response to the Aviation Security Act, OSO issued guidance in October 2016 intended to ensure that FSDs share information with stakeholders. OSPIE communicates TSA information about airport operations, such as how TSOs are allocated across airports, to stakeholders.

TSA Modifies Its Staffing Assumptions and Relies on Airport Information to Tailor TSO Staffing Levels to Individual Airports

TSA Modifies Its Staffing Assumptions as Needed Based on Contractor and TSA Officials’ Evaluations and Passenger Throughput Forecasts

In fiscal years 2016 and 2017, TSA modified the assumptions used in its model, as needed, to reflect changes identified through annual evaluations performed by a contractor. The contractor is specifically tasked with evaluating the assumptions related to the time needed to screen passengers and their baggage. For example, TSA officials stated that they increased the expected time needed to screen passengers for one type of passenger screening equipment in fiscal year 2017 because the contractor found that the actual time needed was more than the assumption TSA used in fiscal year 2016. Similarly, in fiscal year 2016, TSA allocated fewer staff to review images of checked baggage, compared to previous years, because the contractor’s evaluation determined it took TSOs less time to review the images than the time observed in previous years.
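A minimal sketch of the assumption-update logic described above: compare observed screening times against the current model assumption and flag those that diverge. The threshold, data values, and names below are hypothetical illustrations, not the contractor's actual evaluation method.

```python
# Hypothetical check of model assumptions against observed screening
# times (seconds per passenger or bag), in the spirit of the contractor
# evaluations described above. A relative gap above the threshold flags
# the assumption for update in next year's model.

current_assumptions = {"AIT": 10.0, "WTMD": 6.0, "baggage_image_review": 8.0}
observed_means = {"AIT": 11.4, "WTMD": 6.1, "baggage_image_review": 6.9}

THRESHOLD = 0.05  # flag gaps larger than 5 percent (hypothetical)

for step, assumed in current_assumptions.items():
    observed = observed_means[step]
    gap = (observed - assumed) / assumed
    if abs(gap) > THRESHOLD:
        direction = "increase" if gap > 0 else "decrease"
        print(f"{step}: observed {observed:.1f}s vs assumed {assumed:.1f}s "
              f"({gap:+.0%}); {direction} assumption")
```

This mirrors the two changes described above: a longer observed screening time raises an assumption for the next fiscal year, while a shorter observed image-review time lowers the associated staffing.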
In addition to modifying its model based on evaluations performed by contractors, TSA officials at the headquarters level review and modify other assumptions in the model to ensure they are accurate. For example, prompted by the long waits in the spring of 2016, officials stated that they modified the model for the 2017 fiscal year based on their evaluation of the 2016 assumptions. Specifically, TSA assumed that 50 percent of airline passengers would use expedited screening in 2016, but only an average of 27 percent of passengers used expedited screening that year. According to the officials, TSA modified this assumption in fiscal year 2017 and now uses TSA Pre® Program data specific to each individual airport in the model. Similarly, officials told us that, since TSA was established in November 2001, many employees will reach 15 years of service with the federal government in fiscal years 2016 and 2017, resulting in increased annual leave allowances. In response, officials have increased the amount of annual leave they expect employees to use and rely on airport-specific data regarding employee tenure to estimate annual leave for the coming year.

TSA has also modified the way it develops assumptions regarding passenger throughput at each airport (a simplified forecasting sketch appears after this section). For example, beginning in fiscal year 2016, TSA used passenger throughput forecasts to allocate staff commensurate with the expected rate of increase in passenger throughput at each airport. The estimated increase in passenger throughput for each fiscal year is based primarily on national and airport-level data from the previous 3 months from PMIS, TSA’s web-based data collection system, and flight forecast data from the airline industry, as well as additional input from other sources. Prior to fiscal year 2016, TSA planned for passenger throughput during the busiest 28 days from the previous fiscal year and did not adjust the assumption for the annual increase in passenger throughput, which increased 2 percent in 2014 and 4 percent in 2015. A TSA headquarters official responsible for overseeing the RAP stated that the agency compared projected passenger throughput to actual passenger throughput for fiscal year 2017 to determine the accuracy of the projections and concluded that no significant changes to the method of forecasting were necessary for fiscal year 2018.

TSA Uses Airport-Level Information to Tailor Staffing Levels to Individual Airport Needs Using Line-Item Adjustments

According to TSA officials, each airport in the United States has unique characteristics that make it difficult to apply a one-size-fits-all solution to staffing security operations. For instance, officials told us that some airports are allocated additional staff to account for the time needed to transport TSOs to off-site training facilities. Because the staffing allocation resulting from TSA’s model does not reflect the full range of operating conditions at individual airports, TSA headquarters officials use airport-specific information to further adjust allocations by changing individual line items within the allocation after running the model, on both an annual and an ad hoc basis. TSA headquarters officials stated that they have developed methodologies for making standard line-item adjustments such as training requirements, overtime, and annual and sick leave. Officials told us they review the methodologies each year and use their professional judgment to modify the methodologies to account for changes in airport needs as well as budget constraints.
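The following is a minimal sketch of the kind of throughput forecasting described above, blending growth observed in recent PMIS daily counts with an industry flight-forecast growth rate. The blending weight, data values, and function are hypothetical illustrations; TSA's actual methodology also draws on additional sources.

```python
# Illustrative throughput forecast: blend growth observed in the prior
# ~3 months of PMIS daily counts with the airline industry's forecast
# growth rate. The 50/50 weight and all data are hypothetical.

def forecast_daily_throughput(recent_counts, same_period_last_year,
                              industry_growth_rate, pmis_weight=0.5):
    observed_growth = sum(recent_counts) / sum(same_period_last_year) - 1
    blended_growth = (pmis_weight * observed_growth
                      + (1 - pmis_weight) * industry_growth_rate)
    average_daily = sum(recent_counts) / len(recent_counts)
    return average_daily * (1 + blended_growth)

# Abbreviated example: five days of airport-level daily passenger counts.
recent = [61_200, 58_900, 64_100, 60_400, 62_800]
last_year = [59_000, 57_500, 61_900, 58_200, 60_100]

planned = forecast_daily_throughput(recent, last_year, industry_growth_rate=0.04)
print(f"planning assumption: {planned:,.0f} passengers per day")
```

Compared with the pre-2016 approach of planning to the prior year's busiest 28 days, a forecast along these lines lets allocations grow with expected demand rather than lag it.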
We found that through its process of tailoring staffing allocations to individual airports’ needs, TSA is able to respond to the circumstances at each individual airport. TSA headquarters officials also use airport-specific data on staff availability, training needs, supervisory needs, and additional security layers to manually adjust the model’s staffing allocation output at a line-item level. For instance, headquarters officials use the previous years’ data on staff sick leave for each airport to evaluate whether they are allocating the appropriate amount of sick leave to their staff allocations on an individual airport basis. According to TSA headquarters officials, sick leave use can vary by airport and region of the country. Similarly, officials stated that they adjust the model’s output to account for individual airport staff’s training needs so that each airport’s staff can meet TSA’s annual training requirements. In addition, according to TSA officials at both the headquarters and airport levels, airport-level officials can request exceptions—modifications to their staffing allocation—based on unusual airport conditions that are difficult to address, such as problematic checkpoint configurations or lack of space for security operations. For instance, officials at one airport said that they had been granted exceptions for one checkpoint because pillars and curves within the checkpoint prevented the lanes in the checkpoint from screening passengers at the rate assumed by the model. TSA officials at the headquarters level review requests for exceptions and use their professional judgment to determine whether the exception will be granted. Finally, in some cases, TSA may adjust an airport’s staffing allocation outside of the annual staffing allocation process and may do so as the result of significant and unforeseen changes in airport operations. For instance, TSA officials stated that one airport was allocated additional staff for the remainder of the fiscal year when the airport opened a new terminal mid-year so that the additional checkpoints could be properly staffed. Officials at another airport we visited said that they had been allocated additional staff when an airline extended its operational hours to ensure appropriate staffing for the additional hours of operation.

TSA Uses Data to Monitor Airport Operations and Respond to Increases in Passenger Wait Times and Throughput

TSA Uses Passenger Wait Time and Throughput Data to Monitor Airport Operations on a Daily Basis

TSA collects passenger wait time and throughput data and uses those data to monitor daily operations at airports. TSA’s Operations Directive (directive), Reporting Customer Throughput and Wait Times, provides instructions for collecting and reporting wait time and passenger throughput data for TSA screening lanes. Regarding wait time data, according to the directive, FSDs or their designees at all Category X, I, and II airports must measure wait times every operational hour in all TSA expedited and standard screening lanes. The directive requires wait times to be measured in actual time, using a verifiable system such as wait time cards, closed circuit television monitoring, or another confirmable method. The directive indicates that wait times should be measured from the end of the line in which passengers are waiting to the WTMD or AIT units.
FSDs or their designees at Category III and IV airports may estimate wait times initially, but the directive requires them to measure actual wait times when wait times are estimated at 10 minutes or greater. The directive also requires FSDs or their designees to collect passenger throughput data directly from the WTMD and AIT units. According to TSA headquarters officials, the machines have sensors that collect the number of passengers that pass through each hour, and TSOs retrieve the data directly from the units. All airports, regardless of category, are required to enter their wait time and throughput data daily into PMIS, TSA’s web-based data entry program, no later than 3:30 AM Eastern Time of the next calendar day so that the data can be included in the morning’s Daily Leadership Report (discussed in more detail below). To monitor operations for all airports, TSA compiles a daily report utilizing a variety of PMIS data points, including wait time and throughput data. The Office of Security Operations’ Performance Management Division disseminates the Daily Leadership Report to TSA officials, including regional directors and FSDs and their designees, every morning, detailing the previous day’s wait times and throughput figures, among other data points. The Performance Management Division includes a quality assurance addendum with each Daily Leadership Report, indicating missing or incorrect data, including wait time and throughput data, and TSA has procedures in place intended to ensure officials at the airports correct the data in PMIS within 2 weeks.

In addition to the Daily Leadership Report, TSA utilizes wait time and throughput data to monitor airport operations at 28 airports in near real time. In May 2016, TSA established the Airport Operations Center (AOC), which conducts near real time monitoring of the operations of 28 airports that, according to TSA headquarters officials, represent the majority of passenger throughput nationwide or are operationally significant. TSA requires the 28 airports monitored by the AOC to enter passenger wait time data and throughput data into PMIS hourly (whereas the remaining airports are only required to submit data once daily, by 3:30 AM Eastern Time, as described above) so that AOC officials can monitor the operations in near real time. In addition, TSA officials at airports are required to report to the AOC when an event occurs—such as equipment malfunctions, weather-related events, or unusually high passenger throughput—that affects airport screening operations and results in wait times that are greater than TSA’s standards of 30 minutes in standard screening lanes or greater than 15 minutes in expedited screening lanes. If an airport is undergoing a period of prolonged wait times, the AOC coordinates with the Regional Director and the FSD to assist in deploying resources. For example, over the course of the summer of 2016, after certain airports experienced long wait times in the spring of 2016 as confirmed by our analysis, the AOC assisted in deploying additional passenger screening canines and TSOs to those airports that experienced longer wait times. The AOC disseminates a morning and evening situational report to TSA airport-level officials and airport stakeholders summarizing nationwide wait times, highlighting wait times at the top airports and any hot spots (unexpected passenger volume or other operational challenges) that may have occurred since the most recent report was issued.
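As a rough illustration of how hourly records like these can be screened against the wait time standards described above, consider the following sketch. The record layout and values are hypothetical and do not reflect PMIS's actual schema.

```python
# Hypothetical hourly wait time records checked against the standards
# described above: 30 minutes for standard lanes, 15 for expedited.
# Hours exceeding a standard would trigger reporting to the AOC.

STANDARD_MINUTES = {"standard": 30, "expedited": 15}

records = [
    {"airport": "AAA", "hour": 7, "lane": "standard", "wait_minutes": 12},
    {"airport": "AAA", "hour": 8, "lane": "standard", "wait_minutes": 34},
    {"airport": "AAA", "hour": 8, "lane": "expedited", "wait_minutes": 6},
]

exceedances = [r for r in records
               if r["wait_minutes"] > STANDARD_MINUTES[r["lane"]]]
for r in exceedances:
    print(f"report to AOC: {r['airport']} hour {r['hour']} "
          f"({r['lane']} lane): {r['wait_minutes']} minutes")

met = sum(r["wait_minutes"] <= STANDARD_MINUTES[r["lane"]] for r in records)
print(f"{met / len(records):.1%} of reported hours met the standard")
```

A compliance share like the one computed at the end is analogous in spirit to the percentages discussed in the next paragraph, though GAO's analysis covered actual data for the 28 AOC-monitored airports.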
In addition to the near real time monitoring of the 28 airports, the AOC also monitors operations at all other airports and disseminates information to airports and stakeholders as needed. To determine the extent to which TSA exceeded its wait time standards, we analyzed wait time data for the 28 airports monitored by the AOC for the period of January 2015 through May 2017 for both standard and expedited screening. Our analysis shows that TSA met its wait time standard of less than 30 minutes in standard screening at the 28 AOC airports 99.3 percent of the time for the period of January 2015 through May 2017. For expedited screening for the same time period at the same airports, we found that 100 percent of the time passengers were reported to have waited 19 minutes or less. Additionally, our analysis confirmed that the percentage of passengers in standard screening waiting over 30 minutes increased in 2016 during the months of March, April, and May as compared to 2015 at all 28 airports monitored by the AOC.

TSA Airport Officials Use a Variety of Tools to Respond to Increases in Passenger Wait Times and Throughput

FSDs and their staff at the airports we visited identified a variety of tools that they utilize to respond to increases in passenger wait times and/or throughput.

TSOs from the National Deployment Force (NDF)—teams of additional TSOs—are available for deployment to airports to support screening operations during major events and seasonal increases in passengers. For example, TSA officials at one airport we visited received NDF officers during busy holiday seasons and officials at another airport received officers during the increase in wait times in the spring and summer of 2016.

TSA officials at select airports use passenger screening canines to expedite the screening process and support screening operations during increased passenger throughput and wait time periods. For example, TSA officials at one airport we visited emphasized the importance of passenger screening canines as a useful tool to minimize wait times and meet passenger screening demands at times when throughput is high. Officials at another airport we visited rely on these canines in busy terminals during peak periods. According to officials at two of the airports we visited, the use of passenger screening canines helped them to reduce wait times due to increased passenger volumes in the spring and summer of 2016.

TSA officials at airports also utilize part-time TSOs and overtime hours to accommodate increases in passenger throughput and wait times. For example, according to officials at all eight of the airports we visited, they use overtime during peak travel times, such as during holiday travel seasons, and officials usually plan the use of overtime in advance. Additionally, TSA officials at four of the airports we visited told us they use part-time TSOs to help manage peak throughput times throughout the day.

According to TSA officials at two of the airports we visited, they move TSOs between checkpoints to accommodate increases in passenger throughput at certain checkpoints and to expedite screening operations. For example, TSA officials at one airport we visited have a team of TSOs that terminal managers can request on short notice. Officials at the other airport estimated that they move TSOs between terminals about 40 times per day.
TSA Has Taken Steps to Improve Information Sharing with Stakeholders and Most Stakeholders We Interviewed Reported Improved Satisfaction

TSA Improved Information Sharing with Stakeholders through Daily Conference Calls, Presentations, and Meetings

TSA headquarters has taken steps intended to improve information sharing with stakeholders about staffing and related screening procedures at airports. For example, TSA officials hold daily conference calls with industry association, airline, and airport officials at the 28 airports monitored by the AOC. According to TSA headquarters officials, TSA established the daily conference call as a mechanism intended to ensure timely communication with stakeholders and to help identify and address challenges in airport operations such as increases in passenger wait times. Also, TSA headquarters officials stated that they conducted a series of presentations and meetings with industry, airline, and airport officials to discuss TSA’s RAP, security enhancements at airports, and airport screening processes, among other things. For example, TSA’s headquarters officials shared information about the fiscal year 2017 RAP in October 2016 during a briefing at an industry conference and a meeting with airline representatives, airline engineers, and Federal Aviation Administration officials. Additionally, TSA headquarters officials facilitated a stakeholder meeting in May 2017 to discuss planned improvements for the TSA Pre® Program and met with stakeholders in June 2017 to discuss security enhancements and changes to screening procedures for carry-on baggage.

In addition to headquarters-level initiatives, at the eight airports we visited, we found that FSDs shared information with airport and airline officials by meeting on an ongoing basis to discuss TSA staffing and related screening procedures. For example, according to the FSDs and airline and airport officials at all eight airports we visited, FSDs met with stakeholders on a daily, weekly, monthly, or quarterly basis. FSDs and airline and airport officials told us that during these meetings FSDs discussed TSO staffing levels at the airports, instances when passenger screening wait times were long at security checkpoints, and TSA screening equipment performance, among other things.

Stakeholders Reported Improved Satisfaction with TSA Headquarters Information Sharing Efforts and with Most FSDs

Stakeholders told us that TSA headquarters officials and most FSDs improved information sharing since fiscal year 2016. With regard to TSA headquarters officials’ information sharing efforts, officials from all three industry associations we interviewed stated that, since fiscal year 2016, TSA headquarters improved information sharing with their association member companies and attributed that improvement, in part, to the daily conference call between TSA and stakeholders. For example, officials from one industry association stated that the calls benefited members by facilitating collaboration with TSA to more quickly identify and address problems, such as malfunctioning screening equipment, before the problems negatively affected passengers. An official from another industry association told us that the daily conference call improved communication substantially between TSA and the organization by providing a regular opportunity to discuss airport security issues and TSA’s plans to resolve those issues.
Additionally, stakeholders we interviewed generally reported positive relationships or improved information sharing with FSDs, but also noted differences in the type and extent of information that FSDs shared. For example, officials at seven of eight airlines and all eight airports we visited stated that they have positive relationships with their FSDs and that their FSDs were accessible and available when needed, while the remaining airline official noted improving access to information. Furthermore, officials from all three industry associations cited improved information sharing between their members at airports and FSDs since fiscal year 2016, but officials from two associations noted that some FSDs still do not regularly share information, such as changes in the number of TSOs staffed at individual airports. According to TSA headquarters officials, stakeholders can elevate any problems they experience with FSDs sharing information to regional directors, who are responsible for ensuring that FSDs engage regularly with stakeholders.

Agency Comments and Our Evaluation

We provided a draft of this product to DHS for comment. We received technical comments, which we incorporated as appropriate. We are sending copies of this report to the Secretary of Homeland Security, the Administrator of TSA, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-7141 or groverj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix I.

Appendix I: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Ellen Wolfe, Assistant Director; Joel Aldape, David Alexander, Chuck Bausell, David Beardwood, Wendy Dye, Miriam Hill, Susan Hsu, Thomas Lombardi, Kevin Newak, Heidi Nielson, and Natalie Swabb made significant contributions to this report.
Why GAO Did This Study

TSA employs about 43,000 TSOs who screen over 2 million passengers and their baggage each day at airports in the United States. TSA allocates TSOs to airports using both a computer-based staffing model and information from airports, which together are intended to provide each airport with the optimum number of TSOs. In the spring of 2016, long screening checkpoint lines at certain U.S. airports raised questions about TSA's process for allocating TSOs to airports. The Aviation Security Act of 2016 includes a provision for GAO to review TSA's process for allocating TSOs. This report examines how (1) TSA modifies staffing assumptions and tailors staffing levels to airports' needs, (2) TSA monitors wait times and throughput and adjusts resources accordingly, and (3) TSA shares information with stakeholders about staffing and related screening procedures at airports. GAO reviewed TSA documentation describing how the agency modifies staffing assumptions and manages stakeholder coordination. GAO also analyzed passenger wait time and throughput data from January 2015 through May 2017 for the 28 airports monitored by headquarters. GAO visited eight airports selected on the basis of passenger volume and other factors and interviewed TSA officials and stakeholders at those locations. GAO is not making any recommendations.

What GAO Found

The Transportation Security Administration (TSA) modifies staffing assumptions used in its computer-based staffing model (model) and tailors staffing levels to individual airport needs. Specifically, TSA works with a contractor annually to evaluate the assumptions used in the model and modifies the model's assumptions as needed. For example, TSA adjusted its model after contractor evaluations conducted in fiscal years 2016 and 2017 found that transportation security officers (TSO) needed more time to screen passengers and their baggage when using one type of screening equipment. Moreover, in 2016, TSA began using forecasts on the number of passengers screened at each airport's checkpoints (throughput) to better allocate staff commensurate with the expected rate of increase in passenger throughput at each airport. Furthermore, prompted by the long wait times at some airports in 2016, for the 2017 model TSA officials used actual expedited screening data, specific to each individual airport, rather than relying on the system-wide estimate used in 2016. TSA officials also use other information specific to each airport—such as staff training needs—to further tailor the TSO allocation because the initial allocation resulting from the model does not reflect the full range of operating conditions at individual airports. TSA uses data to monitor passenger wait times and throughput on a daily basis and responds to increases. For example, TSA's Airport Operations Center (AOC) monitors daily wait times and passenger throughput from 28 airports that TSA officials say represent the majority of passenger throughput nationwide or are operationally significant. Furthermore, TSA officials at airports are required to report to the AOC when an event occurs—such as equipment malfunctions—that affects airport screening operations and results in wait times that are greater than 30 minutes in standard screening lanes. GAO analyzed wait time data for the AOC-monitored airports for the period of January 2015 through May 2017 and found that TSA's reported wait times met its standard of less than 30 minutes in standard screening 99 percent of the time.
Within that time frame, two airports accounted for the longest wait times in the spring of 2016. TSA officials identified several tools, such as passenger screening canines, that they use to respond to increases in passenger wait times at these airports. TSA has taken steps to improve information sharing with airline and airport officials (stakeholders) about staffing and related airport screening operations, and most stakeholders GAO interviewed reported improved satisfaction with information sharing. However, some stakeholders noted differences in the type and extent of information shared. According to TSA officials, stakeholders can elevate any problems they experience with information sharing within TSA to ensure information is shared regularly with stakeholders.
Background

Medicare FFS Program

In 2016, Medicare spent about $380 billion on health care services for beneficiaries enrolled in Medicare FFS, which consists of two separate parts: Medicare Part A, which primarily covers hospital services, and Medicare Part B, which primarily covers outpatient services. The majority of the 38 million Medicare FFS beneficiaries were enrolled in both Part A and Part B, although about 5 million were enrolled in Part A only and 0.3 million were enrolled in Part B only.

Medicare FFS Cost-Sharing Design

The general design of Medicare FFS cost-sharing has been largely unchanged since Medicare’s enactment in 1965. It includes separate deductibles for Part A and Part B services, a variety of per-service copayments and coinsurance after the deductibles are met, and no cap on beneficiaries’ cost-sharing responsibilities (see table 1).

Supplemental Insurance among Medicare FFS Beneficiaries

The current cost-sharing design leaves beneficiaries exposed to potentially catastrophic cost-sharing, and in part because of that, in 2015, 81 percent of Medicare FFS beneficiaries obtained supplemental insurance that covered some or all of their Medicare cost-sharing responsibilities, often in exchange for an additional premium (see table 2). For example, in 2015, 31 percent of Medicare FFS beneficiaries purchased a private Medigap plan, the most common types of which fully insulated them from Medicare cost-sharing responsibilities in exchange for an average annual premium of $2,400. Another 20 percent of Medicare FFS beneficiaries enrolled in Medicaid, which generally covered most of their Medicare cost-sharing responsibilities; however, these low-income beneficiaries generally only paid a limited or no premium for this supplemental coverage.

Medicare FFS Cost-sharing Can Be Confusing and Lead to Overuse of Services; Modernizing Could Address Concerns, but Would Involve Trade-offs

The current Medicare FFS cost-sharing design can be confusing, contribute to beneficiaries’ overuse of services, and leave beneficiaries exposed to catastrophic costs. Modernizing the design could address these concerns, but would involve trade-offs. For example, as shown in four illustrative designs that we evaluated, maintaining Medicare’s share of costs would involve a trade-off between the level of the cap and the deductible (or other cost-sharing).

Medicare FFS Cost-sharing Design Can Be Confusing, Contribute to Beneficiaries’ Overuse of Services, and Leave Them Exposed to Catastrophic Costs

As noted by Medicare advocacy groups and others, the current Medicare FFS cost-sharing design, which includes multiple deductibles, can be confusing for beneficiaries. In 2014, 16 percent of Medicare FFS beneficiaries were responsible for at least one Part A deductible for an episode of inpatient care as well as the annual Part B deductible. (Medicare FFS beneficiaries may be subject to more than one Part A deductible during the year, as the Part A deductible applies to each admission to an inpatient hospital or skilled nursing facility that occurs more than 60 consecutive days after the prior admission.) The Congressional Budget Office has cited the separate deductibles as one way in which Medicare FFS cost-sharing is more complicated than private plans. In 2016, according to a survey conducted by the Kaiser Family Foundation, only 1 percent of workers with employer-sponsored insurance had a separate deductible for inpatient services.
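To make the multiple-deductible mechanics concrete, here is a minimal sketch of the benefit-period rule described in the parenthetical above. The dollar amounts are illustrative placeholders (the actual deductibles are set each year), and the counting logic follows the 60-day rule as stated.

```python
# Sketch of the rule described above: a new Part A deductible applies to
# each inpatient admission occurring more than 60 days after the prior
# admission, on top of the annual Part B deductible. Dollar amounts are
# illustrative placeholders only.

from datetime import date

PART_A_DEDUCTIBLE = 1_216  # per inpatient episode (illustrative)
PART_B_DEDUCTIBLE = 147    # annual (illustrative)

def count_part_a_deductibles(admission_dates):
    """Count Part A deductibles owed for a year's inpatient admissions."""
    count, prior = 0, None
    for admitted in sorted(admission_dates):
        if prior is None or (admitted - prior).days > 60:
            count += 1
        prior = admitted
    return count

admissions = [date(2014, 2, 3), date(2014, 9, 18)]  # 227 days apart
episodes = count_part_a_deductibles(admissions)
total = episodes * PART_A_DEDUCTIBLE + PART_B_DEDUCTIBLE
print(f"{episodes} Part A deductibles + Part B deductible = ${total:,}")
```

A beneficiary with these two admissions would owe the Part A deductible twice in the same year, the kind of outcome the single-deductible proposals discussed below are meant to simplify.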
Moreover, inpatient services tend to be nondiscretionary, and one or more deductibles for those services can create a financial burden for beneficiaries, while having minimal effect on their use of inpatient services. The cost-sharing design also affects beneficiaries’ utilization of services. For example, as noted by the bipartisan Simpson-Bowles Fiscal Commission, the lack of a coherent cost-sharing system is a significant contributor to overuse and misuse of care. This is particularly true for services such as home health and clinical laboratory services, which currently have no cost-sharing under Medicare FFS and thus do not provide beneficiaries an incentive to decline care of negligible value. Because of these concerns, MedPAC recommended adding a cost-sharing requirement for home health services that were not preceded by hospitalization or post-acute care, noting that the current lack of cost-sharing has likely contributed to the significant rise in utilization for these services, which suggests some overuse.

At the same time, the lack of an annual cost-sharing cap prevents Medicare FFS from fulfilling a key purpose of health insurance: protecting beneficiaries from catastrophic medical expenses. While most beneficiaries had cost-sharing responsibilities under $2,000 in 2014, 1 percent—over 300,000 beneficiaries—had responsibilities over $15,000, including several hundred beneficiaries with responsibilities between $100,000 and $3 million. (See fig. 1.) Given the risk of catastrophic medical expenses, a focus group of current and future Medicare beneficiaries convened by MedPAC indicated that an annual cap is the cost-sharing design feature they were most interested in seeing added to the Medicare benefit. Annual caps are a common design feature of private plans, as most are required to have an annual cap, including those participating in MA. Specifically, since 2011, CMS has required most MA plans to have an annual cap of $6,700 or less and grants them additional flexibility in their cost-sharing design if they voluntarily set their cap at or below $3,400. The mandatory and voluntary caps for certain MA plans that provide both in- and out-of-network coverage are the same ($6,700 and $3,400) for in-network services, and 1.5 times higher ($10,000 and $5,100) for combined in- and out-of-network services.

In addition to these implications of the cost-sharing design itself, the American Academy of Actuaries and others have noted that the complexity and the possibility of unlimited responsibilities increase demand for supplemental insurance, which can lead to added costs for beneficiaries and the Medicare program. It is uncommon for beneficiaries enrolled in private health insurance to have supplemental coverage. By insulating beneficiaries from some or all cost-sharing responsibilities (and not just catastrophic costs), supplemental insurance further reduces the incentives for beneficiaries to evaluate the need for discretionary care. In part because of these reduced incentives, we previously estimated that both beneficiaries’ average total out-of-pocket costs and average Medicare program spending were higher for Medicare FFS beneficiaries with Medigap than those with FFS only.

Modernizing Medicare FFS Cost-sharing Could Address Concerns, but Would Involve Design Trade-offs

Modernizing Medicare FFS cost-sharing could address these concerns, but would involve design trade-offs.
Specifically, as proposed by various groups, revising Medicare’s cost-sharing design to include a single deductible, modified cost-sharing requirements, and an annual cost-sharing cap could address concerns with the current cost-sharing design. However, there are multiple options for revising within this broad framework, including two key design trade-offs that would affect the extent to which a modernized structure would address concerns about the current design (and possibly also raise new concerns).

One trade-off centers on how to modify the existing complicated set of cost-sharing requirements for different services. While the reform proposals have generally suggested moving to a single deductible, they have varied in how to modify the subsequent per-service payments. Some proposals have emphasized the value of simplicity and suggested replacing the complex set of per-service payments above the deductible with a uniform coinsurance. A uniform coinsurance would simplify the cost-sharing design, provide beneficiaries insight into the total cost of each service, and introduce cost-sharing for certain potentially discretionary services, such as home health services. However, as noted by the Medicare Payment Advisory Commission and Congressional Budget Office, uniform coinsurance also has drawbacks, such as a fixed percentage of an unknown bill being harder for beneficiaries to understand and predict than copayments. Other proposals have emphasized the need to set cost-sharing based on the value of services, and have suggested moving Medicare toward a value-based insurance design in which per-service cost-sharing would vary based on the clinical value of the service to an individual beneficiary. While a value-based design would specifically target cost-sharing to promote prudent use of health care services, implementing it is challenging in practice and would be more complicated for beneficiaries to understand and for CMS to administer, though CMS is testing the feasibility of value-based insurance design in MA.

A second design trade-off centers on how to set the level of the deductible and the annual cap. As shown in the four illustrative cost-sharing designs we evaluated, the lower the cap, the higher the deductible (or other cost-sharing requirements) would need to be to maintain Medicare’s and beneficiaries’ aggregate share of costs similar to that of the current design. For example, holding utilization and enrollment constant, we found that even without any deductible, a uniform coinsurance of 18 percent (a level below the existing 20 percent coinsurance for most Part B services) would be sufficient to add a cap near $10,000 (the mandatory cap for certain MA plans that allow beneficiaries to see any provider). In contrast, it would take a deductible near $1,225 (a level similar to the existing Part A deductible for each inpatient episode) and a uniform coinsurance of 20 percent to establish a cap of $3,400 (the voluntary cap for most MA plans). (See table 3.)

Different levels of the deductible and cap would address certain concerns of the current design raised by GAO and others but also could create new ones. For example, as our analysis of four illustrative cost-sharing designs shows, designs with relatively high caps would provide some additional protection from catastrophic costs while maintaining a deductible and coinsurance near or below the current levels for Part B services.
However, per an analysis conducted by the Kaiser Family Foundation and the Urban Institute, half of Medicare beneficiaries in 2016 were living on less than $26,200 in income; thus, caps of $6,700 or higher may still leave some beneficiaries vulnerable to costs that are catastrophic for them and may not significantly decrease the associated demand for supplemental insurance. In contrast, designs with relatively low caps would provide greater protection from catastrophic costs. However, as noted by the Congressional Budget Office, beneficiaries who reached the cap would have less incentive to use services prudently. In addition, the higher deductible needed to offset a lower cap while maintaining Medicare's share of costs could present a financial barrier for some beneficiaries to obtain necessary care. Direct Effect of Modernizing Medicare FFS Cost-sharing Design Would Depend on Specific Revisions and Time Horizon The direct effect of modernizing the Medicare FFS cost-sharing design (i.e., the effect when holding utilization and enrollment constant) on beneficiaries' cost-sharing responsibilities would depend on the specific revisions and the time horizon examined. As we noted above, modernizing the FFS cost-sharing design while maintaining Medicare's aggregate share of costs similar to the current design requires a trade-off between the level of the deductible and cap. At the beneficiary level, this design trade-off affects beneficiaries' annual cost-sharing and the degree to which beneficiaries would be protected from catastrophic costs. One way of viewing how the design trade-off affects beneficiaries is to compare across different designs the median annual cost-sharing responsibility with the level of the cap (see fig. 2). In examining the direct effect of the four illustrative modernized designs we analyzed, we found the following: During year 1, cost-sharing designs that feature relatively low deductibles and relatively high caps would result in a median annual beneficiary cost-sharing responsibility close to or below that of the current design. In contrast, designs with relatively low caps—and therefore greater beneficiary protection from catastrophic costs—would result in a median annual beneficiary cost-sharing responsibility above that of the current design. For example, during year 1 of a design with no deductible, 18 percent coinsurance, and a cap near $10,000, we found that the median annual cost-sharing responsibility would be $479, which is below that of the current design ($621), despite the addition of a cap. In contrast, during year 1 of a design with a $1,225 deductible, 20 percent coinsurance, and a cap near $3,400, the median annual cost-sharing responsibility would be $1,486, or about 2.4 times that of the current design. However, in exchange for this higher median annual cost-sharing responsibility, beneficiaries would have much greater protection from catastrophic costs, as their annual cost-sharing responsibilities would be capped near $3,400. By the end of 8 years, there would still be differences in the median annual beneficiary cost-sharing responsibility across different designs, but they would become less pronounced—despite the significantly different levels of catastrophic protection. As beneficiaries age and become more likely to have catastrophic costs in at least one year, the median annual cost-sharing responsibility would increase, regardless of the cost-sharing design.
However, by the end of 8 years the differences in the median annual cost-sharing responsibility across different designs would become less pronounced. For example, the median annual cost-sharing responsibility under the design with a cap near $10,000 would increase from below that of the current design in year 1 to about 1.1 times that of the current design by the end of 8 years. In contrast, the median annual cost-sharing responsibility under the design with the cap near $3,400 would decrease from about 2.4 times that of the current design in year 1 to about 1.6 times by the end of 8 years. (See app. I table 4 for more details, including results on our other two illustrative designs and results over 4 years.) The same patterns held when looking at how the design trade-off affects beneficiaries in another way: the percentage of beneficiaries with cost-sharing responsibilities lower and higher than under the current design (see fig. 3). In examining the direct effect of our four illustrative designs, we found the following: During year 1, designs that feature relatively low deductibles and relatively high caps would result in a minority of beneficiaries having cost-sharing responsibilities that are at least $100 higher than under the current design. In contrast, designs with relatively high deductibles and relatively low caps would result in the majority of beneficiaries having cost-sharing responsibilities that are higher than under the current design. For example, during year 1 of a design with no deductible, 18 percent coinsurance, and a cap near $10,000, 16 percent of beneficiaries would have cost-sharing responsibilities at least $100 higher than their responsibilities under the current design. In contrast, during year 1 of a design with a $1,225 deductible, 20 percent coinsurance, and a cap near $3,400, 69 percent of beneficiaries would have cost-sharing responsibilities at least $100 higher than their responsibilities under the current design. By the end of 8 years, there would still be differences across the designs, but they would become less pronounced—despite levels of catastrophic protection that vary significantly. Over a longer time horizon, a larger percentage of beneficiaries would reach the cap at least once, regardless of the cost-sharing design (ranging from 23 percent reaching the cap at least once over 8 years under the design with a cap near $10,000 to 66 percent under the design with a cap near $3,400). However, the subset of these beneficiaries who nonetheless had annual cost-sharing responsibilities at least $100 higher would also increase. Whether this increase would be augmented or offset by the changes over time in the percentage of beneficiaries who never reached the cap and had higher cost-sharing responsibilities would depend on the specific design. For example, the percentage of beneficiaries with annual cost-sharing responsibilities at least $100 higher than the current design would increase from 16 percent in year 1 to 38 percent by year 8 under the design with a cap near $10,000. In contrast, this percentage would decrease from 69 percent in year 1 to 67 percent by year 8 under the design with a cap near $3,400. (See app. I tables 5 and 6 for more details, including results on our other two illustrative designs and results over 4 years.)
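The arithmetic behind these comparisons follows directly from the three design parameters. The sketch below is a minimal illustration of how a beneficiary's annual cost-sharing responsibility could be computed under a single deductible, uniform coinsurance, and annual cap; it is not GAO's actual model, which was built on summarized Medicare claims data, and the beneficiary spending amounts are hypothetical.

```python
def annual_cost_sharing(allowed_spending, deductible, coinsurance, cap):
    """Beneficiary's annual cost-sharing responsibility under a design with
    a single deductible, uniform coinsurance, and an annual cap.
    Utilization is held constant, mirroring the report's 'direct effect'."""
    # The beneficiary pays all costs up to the deductible...
    below_deductible = min(allowed_spending, deductible)
    # ...plus the coinsurance share of costs above the deductible...
    above_deductible = max(allowed_spending - deductible, 0) * coinsurance
    # ...but never more than the annual cap.
    return min(below_deductible + above_deductible, cap)

# Two of the four illustrative designs discussed above.
designs = {
    "no deductible, 18% coinsurance, cap near $10,000": (0, 0.18, 10_000),
    "$1,225 deductible, 20% coinsurance, cap near $3,400": (1_225, 0.20, 3_400),
}

# Hypothetical annual Medicare-allowed spending for three beneficiaries.
for spending in (3_000, 12_000, 150_000):
    for name, (deductible, coinsurance, cap) in designs.items():
        owed = annual_cost_sharing(spending, deductible, coinsurance, cap)
        print(f"${spending:>7,} in spending | {name}: ${owed:,.0f}")
```

Run as written, the sketch reproduces the trade-off shown in figures 2 and 3: a beneficiary with a low-spending year owes less under the no-deductible design ($540 versus $1,580 at $3,000 in spending), while a beneficiary with catastrophic spending owes far less under the low-cap design ($3,400 versus $10,000 at $150,000 in spending).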
Modernizing Medicare Cost-sharing Design Would Affect Costs Indirectly through Behavioral Responses Modernizing the Medicare FFS cost-sharing design would affect beneficiaries' costs indirectly through beneficiaries' and supplemental insurers' behavioral responses to altered incentives, according to the studies we reviewed and the experts we spoke to. These studies and experts identified several types of behavioral responses that would influence the net effect of a modernized design on beneficiaries' out-of-pocket costs, including changes in beneficiaries' demand for, and insurers' supply of, supplemental insurance; changes in beneficiaries' utilization of services; changes in Medicare beneficiaries' enrollment in FFS versus MA; and interactions among these and other behavioral responses, including effects on the price of supplemental insurance. According to studies we reviewed and experts we spoke to, implementing a modernized cost-sharing design would likely trigger changes in the demand for and supply of supplemental insurance. For example, a focus group of current and future Medicare beneficiaries convened by MedPAC and a report from the American Academy of Actuaries stated that the addition of an annual cap would reduce the need of some beneficiaries to purchase supplemental insurance. While beneficiaries who drop their supplemental insurance would then need to pay all their Medicare cost-sharing responsibilities, those might be less than their annual premium for supplemental insurance. Additionally, according to the same MedPAC study and a Congressional Budget Office report, retiree coverage may change under a modernized design. For example, with a cap in place, there would be less difference between employer-sponsored plans and Medicare, and employers may choose to alter the supplemental insurance they offer. CMS officials told us that this would continue the trend of private employers reducing retiree health coverage. Several studies we reviewed and experts we interviewed indicated that implementing a modernized design could also trigger changes in utilization of Medicare services, the extent of which would affect beneficiaries' out-of-pocket costs. For example, the RAND Health Insurance Experiment (HIE), which some experts consider to be the most comprehensive study on price and utilization, found that patients were "moderately sensitive to price." The RAND HIE found that patients respond to increases in cost-sharing that they need to pay at least partly out-of-pocket by decreasing their use of some services. Similarly, CMS officials told us that they would expect utilization to decrease as beneficiaries' out-of-pocket costs increased, while a study in the American Economic Review found that the addition of a copayment led to a decline in office visits. The RAND HIE study suggests that a 10 percent increase in cost-sharing would lead to a 1 to 2 percent decline in patients' use of services (that is, a price elasticity of demand of roughly -0.1 to -0.2). In the case of the RAND HIE study, cost-sharing affected the number of contacts people initiated with their physician, which in turn affected preventive care and diagnostic tests. The study found that this could potentially affect patients' use of both effective and less effective services. According to several studies and interviews with experts, design changes could trigger other behavioral responses.
For example, a study by the Kaiser Family Foundation and a report by the Congressional Budget Office both anticipated that a modernized design could change the proportion of Medicare beneficiaries who decide to enroll in FFS or MA. Similarly, officials from the American Academy of Actuaries told us that they would expect a change in demand for MA under a modernized design. Under the current Medicare design, all MA plans have an annual cap that protects beneficiaries from catastrophic medical expenses. Between 2008 and 2017, the percentage of Medicare beneficiaries who chose to enroll in an MA plan increased from 22 to 33 percent. CMS officials told us that the increases in MA enrollment may be due in part to the requirement that MA plans must include an annual cost-sharing cap. The Kaiser Family Foundation study found that a modernized design, similar to that of an MA plan, might incentivize some MA beneficiaries to move back to FFS. According to experts we interviewed and studies we reviewed, the different behavioral responses described above would also likely interact and affect beneficiaries' out-of-pocket costs. CMS officials told us that when all of the factors contributing to out-of-pocket costs are combined, it is difficult to assess the net effect of a modernized cost-sharing design on beneficiaries' out-of-pocket costs. For example, officials with the National Association of Insurance Commissioners emphasized that as both demand for supplemental insurance and expected utilization changed, supplemental premiums would also change, which in turn would affect out-of-pocket costs. Similarly, studies by both MedPAC and the Congressional Budget Office found that changes in beneficiaries' level of supplemental insurance might trigger additional changes in utilization, which would also result in changes to the pricing of supplemental insurance. Specifically, if a number of relatively healthy beneficiaries dropped their supplemental insurance, and the beneficiaries left were sicker (that is, more costly), premiums for supplemental insurance might increase. Officials from the Congressional Budget Office told us that, conversely, if the more costly beneficiaries dropped their supplemental insurance, premiums might be lower. Agency Comments We provided a draft of this report to the Department of Health and Human Services for comment. The Department provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to appropriate congressional committees, the Secretary of Health and Human Services, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or cosgrovej@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II.
Appendix I: Direct Effect on Medicare Beneficiaries' Cost-sharing Responsibilities under Four Illustrative Cost-sharing Designs The direct effect of modernizing the Medicare fee-for-service (FFS) cost-sharing design (i.e., the effect when holding utilization and enrollment constant) on beneficiaries' cost-sharing responsibilities would depend on the specific revisions and the time horizon examined. Tables 4, 5, and 6 present the direct effect of modernizing the Medicare FFS cost-sharing design on beneficiaries' cost-sharing responsibilities under four illustrative designs. Each table presents the direct effect of each illustrative design over 1-, 4-, and 8-year time horizons. Appendix II: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Greg Giusto (Assistant Director), Alison Binkowski, George Bogart, Reed Meyer, Beth Morrison, Brandon Nakawaki, and Brian O'Donnell made key contributions to this report. Also contributing were Todd Anderson, Emei Li, Yesook Merrill, Vikki Porter, and Frank Todisco.
Why GAO Did This Study To address concerns with the current Medicare FFS cost-sharing design, various groups have proposed modernizing the design to make it simpler and include features found in private plans. These proposals have generally included a single deductible, modified cost-sharing requirements (e.g., a uniform coinsurance), and the addition of a cap on beneficiaries' annual cost-sharing responsibilities. GAO was asked to review how modernized cost-sharing designs would affect beneficiaries' costs over multiple years. This report describes implications of the current cost-sharing design; options for modernizing; and how modernized cost-sharing designs could directly and indirectly affect beneficiaries' costs. GAO reviewed studies related to modernizing Medicare's cost-sharing design and interviewed authors of those studies and other experts. GAO also used summarized Medicare claims data from 2007 to 2014 (the most recent data available) to develop four illustrative modernized designs, each including a single deductible, uniform coinsurance, and an annual cap while maintaining Medicare program spending similar to the current design. For each design, GAO calculated how beneficiaries' annual cost-sharing responsibilities compared with the current design over a 1-, 4-, and 8-year time horizon. The Department of Health and Human Services provided technical comments on a draft of this report, which GAO incorporated as appropriate. What GAO Found GAO and others have raised concerns about the design of Medicare fee-for-service (FFS) cost-sharing—the portion of costs beneficiaries are responsible for when they receive care. The current cost-sharing design has been largely unchanged since Medicare's enactment in 1965, can be confusing for beneficiaries, and can contribute to overuse of services. Additionally, the design leaves some beneficiaries exposed to catastrophic costs that can exceed tens of thousands of dollars annually. The complexity of the design and lack of an annual cap on cost-sharing responsibilities also increases demand for supplemental insurance, which can cost beneficiaries thousands annually and further contribute to overuse of services. Modernizing Medicare FFS's cost-sharing design to include features found in private plans could help address these concerns, but would involve design trade-offs. For example, adding an annual cap on cost-sharing responsibilities while maintaining Medicare's aggregate share of costs similar to the current design would involve a trade-off between the level of the cap and other cost-sharing requirements. In analyzing four illustrative FFS cost-sharing designs, GAO found that the direct effect of modernizing the design on beneficiaries' cost-sharing responsibilities—that is, the effect when holding utilization and enrollment constant—would depend on the specific revisions and the time horizon examined. For example, GAO found that During year 1, cost-sharing designs that feature relatively low deductibles (costs a beneficiary is responsible for before Medicare starts to pay) and relatively high caps would result in a median annual beneficiary cost-sharing responsibility close to or below that of the current design. In contrast, designs with relatively low caps—and therefore greater beneficiary protection from catastrophic costs—would result in a median annual cost-sharing responsibility above that of the current design. 
By the end of 8 years, there would still be differences in the median annual beneficiary cost-sharing responsibility across different designs, but they would become less pronounced. Modernizing the Medicare FFS cost-sharing design would also affect beneficiaries' costs indirectly through altered incentives. The studies GAO reviewed and experts GAO interviewed identified several types of behavioral responses that would influence the net effect of a modernized design on beneficiaries' out-of-pocket costs, including changes in beneficiaries' demand for and insurers' supply of supplemental insurance; changes in beneficiaries' use of services; changes in Medicare beneficiaries' enrollment in FFS versus Medicare's private plan alternative; and interactions among these and other behavioral responses, including effects on the price of supplemental insurance.
Background VA Suicide Prevention VA has undertaken a number of initiatives to help prevent veteran suicide, including identifying suicide prevention as VA's highest clinical priority in its strategic plan for fiscal years 2018 through 2024 (see fig. 2). VA uses CDC's research on risk factors and prevention techniques to inform its approach to suicide prevention in the veteran community. There is no single determining cause for suicide; instead, suicide occurs in response to biological, psychological, interpersonal, environmental, and societal influences, according to the CDC. Specifically, suicide is associated with risk factors that exist at the individual level (such as a history of mental illness or substance abuse, or stressful life events, such as divorce or the death of a loved one), community level (such as barriers to health care), or societal level (such as the way suicide is portrayed in the media and stigma associated with seeking help for mental illness). According to VA, veterans may possess risk factors related to their military service, such as a service-related injury or a difficult transition to civilian life. CDC reports that protective factors—influences that help protect against the risk for suicide—include effective coping and problem-solving skills, strong and supportive relationships with friends and family, availability of health care, and connectedness to social institutions such as school and community. VA's 2018 National Strategy for Suicide Prevention identifies four focus areas: (1) healthy and empowered veterans, families, and communities; (2) clinical and community preventative services; (3) treatment and support services; and (4) surveillance, research, and evaluation. Collectively, these four areas encompass 14 goals for preventing veteran suicide, one of which is implementing communication designed to prevent veteran suicide by changing knowledge, attitudes, and behaviors. VHA's suicide prevention media outreach campaign is just one of its initiatives intended to reduce veteran suicide. For example, in 2007, VHA established the Veterans Crisis Line (VCL), a national toll-free hotline that supports veterans in emotional crisis. Veterans, as well as their family and friends, can access the VCL by calling a national toll-free number—1-800-273-8255—and pressing "1" to be connected with a VCL responder, regardless of whether these veterans receive health care through VHA. VHA added the option to communicate with VCL responders via online chat in 2009, followed by text messaging in 2011. Another VHA suicide prevention initiative is the Recovery Engagement and Coordination for Health – Veterans Enhanced Treatment initiative, or REACH VET. Established in 2016, REACH VET uses predictive modeling to analyze existing data from veterans' health records to identify veterans at increased risk for adverse outcomes, such as suicide, hospitalization, or illness. VHA's Suicide Prevention Media Outreach Campaign Suicide prevention officials within VHA's Office of Mental Health and Suicide Prevention (OMHSP) are responsible for implementing the suicide prevention media outreach campaign. Since 2010, VHA has used a contractor to develop suicide prevention media outreach content and monitor its effectiveness. In September 2016, VHA awarded a new contract to the same contractor to provide both suicide prevention and mental health media outreach.
Under the 2016 contract, the suicide prevention and mental health outreach campaigns remain separate and are overseen by separate suicide prevention and mental health officials, both within OMHSP. VHA officials told us that beginning in fiscal year 2019, VHA will separate the contract for suicide prevention and mental health media outreach. Specifically, VHA will utilize an existing agreement with a different contractor for suicide prevention media outreach while the existing contractor will continue to provide mental health media outreach. According to VHA, the purpose of its suicide prevention media outreach campaign is to raise awareness among veterans, their families and friends, and the general public about VHA resources that are available to veterans who may be at risk for suicide. The primary focus of the outreach campaign since 2010 has been to raise awareness of the services available through the VCL. VHA's suicide prevention media outreach falls into two main categories: unpaid and paid. Unpaid media outreach content is typically displayed on platforms owned by VA or VHA, or is disseminated by external organizations or individuals that share VHA suicide prevention content at no cost, as discussed below (see fig. 3). Social media. VA and VHA each maintain national social media accounts on platforms such as Facebook, Twitter, and Instagram, and post content, including suicide prevention content developed by VHA's contractor. VHA also works with other federal agencies, non-governmental organizations, and individuals that post its suicide prevention content periodically. Public service announcements (PSAs). VHA's contractor typically develops two PSAs per year, which various local and national media networks display at no cost to VHA. Website. VHA's contractor maintains the content displayed on the VCL website (veteranscrisisline.net), including much of the content it develops for other platforms, such as PSAs and social media content. Visitors to the website can both view the content on the website and share it on their own platforms. Paid digital media. Examples of paid digital media include online keyword searches, in which VHA pays a search engine a fee for its website to appear as a top result in response to selected keywords, such as "veterans crisis line" or "veteran suicide." Paid digital media also includes social media posts for which VHA pays a fee to display its content to a widespread audience, such as users with a military affiliation. Paid "out-of-home" media. "Out-of-home" refers to the locations where this type of content is typically displayed. Examples include billboards, bus and transit advertisements, and local and national radio commercials. VHA recognizes September as Suicide Prevention Month each year. During this month, VHA establishes a theme and increases its outreach activities, including a combination of both paid and unpaid media outreach. According to VHA, it typically incorporates additional outreach techniques during this month, such as enlisting the support of celebrities or hosting live chat sessions on social media platforms, including Facebook and Twitter. VHA's Suicide Prevention Media Outreach Activities Declined in Recent Years Due to Leadership Turnover and Reorganization VHA's Suicide Prevention Media Outreach Activities Declined in Fiscal Years 2017 and 2018 VHA's suicide prevention media outreach activities declined in fiscal years 2017 and 2018 compared to earlier years of the campaign.
We identified declines in social media postings, PSAs, paid media, and suicide prevention month activities, as discussed below. Social media. The amount of social media content developed by VHA's contractor decreased in fiscal years 2017 and 2018, after increasing in each of the prior four years. Specifically, VHA reported that its contractor developed 339 pieces of social media content in fiscal year 2016, compared with 159 in fiscal year 2017, and 47 during the first 10 months of fiscal year 2018 (see fig. 5). PSAs. VHA's contractor is required to develop two suicide prevention PSAs in each fiscal year. VHA officials said that the development of the two PSAs was delayed in fiscal year 2018. Specifically, as of August 2018, VHA reported that one PSA was completed, but had not yet aired, and another PSA was in development. As a result of this delay, VHA had not aired a suicide prevention PSA on television or radio in over a year; this is the first time there has been a gap of more than a month since June 2012. Paid media. VHA had a total budget of $17.7 million for its suicide prevention and mental health media outreach for fiscal year 2018, of which $6.2 million was obligated for suicide prevention paid media. As of September 2018, VHA said it had spent $57,000 of its $6.2 million paid media budget. VHA officials estimated that they would spend a total of $1.5 million on suicide prevention paid media for fiscal year 2018 and indicated that the remaining funds would be de-obligated from the contract at the end of the fiscal year and not used for suicide prevention media outreach. VHA officials indicated that the reason they did not spend the remaining funds on suicide prevention paid media in fiscal year 2018 was that the approval of the paid media plan was delayed due to changes in leadership and organizational realignment of the suicide prevention program. As a result, VHA officials said they limited the paid media outreach in fiscal year 2018 to activities that were already in place, including 25 keyword search advertisements, 20 billboards, and 8 radio advertisements in selected cities across the United States. In prior fiscal years, VHA conducted a variety of digital and out-of-home suicide prevention paid media. For example, in fiscal year 2015, with a suicide prevention paid media budget of more than $4 million, VHA reported that it ran 58 advertisements on Google, Bing, and Facebook, and ran 30 billboards, 180 bus advertisements, more than 19,000 radio advertisements, 252 print advertisements, and 39 movie theater placements in selected cities across the United States. VHA ran similar types of paid media in fiscal years 2013, 2014, and 2016 with variation in quantities based on the approved budget in each of these years. In fiscal year 2017, VHA had a budget of approximately $1.7 million to spend on paid media for both the suicide prevention and mental health outreach campaigns. However, VHA spent less than 10 percent of the funds (approximately $136,000) to run paid advertisements on Google and Bing for suicide prevention in fiscal year 2017; the remainder was spent on mental health outreach. Suicide Prevention Month. VHA documentation indicated that Suicide Prevention Month 2017 was a limited effort. VHA officials said that this was because they did not begin preparing early enough. In May 2018, VHA officials indicated that they were similarly behind schedule for planning Suicide Prevention Month 2018, though they told us in August 2018 that they had caught up.
VHA Leadership Turnover and Reorganization Resulted in the Decline in Suicide Prevention Media Outreach Activities VHA officials told us that the decrease in suicide prevention media outreach activities was due to leadership turnover and reorganization since 2017. For example, VHA officials said the National Director for Suicide Prevention position was vacant from July 2017 through April 2018. VHA filled the role temporarily with a 6-month detail from another agency from October 2017 through March 2018 and then hired this individual as the permanent director on April 30, 2018. VHA officials who worked on the campaign told us they did not have leadership available to make decisions about the suicide prevention campaign during this time. For example, VHA officials said they did not have a kick-off meeting between VHA leadership and VHA's contractor at the beginning of fiscal year 2018—a requirement of the contract—because there was no leadership available to participate in this meeting. The officials also reported that suicide prevention leadership was not available for weekly meetings to discuss suicide prevention outreach activities, even after the suicide prevention program obtained an acting director on detail from another agency. VHA staff said that at that time, they focused their suicide prevention media outreach efforts on areas that did not require leadership input, such as updating the VCL website. The absence of leadership available to provide direction and make decisions on the suicide prevention media outreach campaign is inconsistent with federal internal control standards for control environment, which require agencies to assign responsibilities to achieve their objectives. If a key role is vacant, management needs to determine by whom and how those responsibilities will be fulfilled in order to meet its objectives. Officials who worked on the campaign told us they shifted their focus away from the suicide prevention media outreach campaign toward the mental health outreach campaign due to reorganization of the offices responsible for suicide prevention activities in 2017. Specifically, under the new organization, and in the absence of suicide prevention program leadership, the officials began reporting directly to mental health program leadership and became more focused on the mental health outreach aspects of the contract. Following the reorganization, officials who worked on the campaign did not have a clear line of reporting to the suicide prevention program. This is also inconsistent with federal internal control standards for control environment, which require agencies to establish an organizational structure and assign responsibilities, such as establishing lines of reporting so that necessary information flows to management. VHA officials told us that one of the highest priorities for the suicide prevention program since the beginning of fiscal year 2018 was to establish a national strategy for preventing veteran suicides. The national strategy, issued in June 2018, includes suicide prevention outreach as one of the strategy's 14 goals. The national strategy also emphasizes VHA's plans to shift to a public health approach to suicide prevention outreach. The public health approach focuses less on raising awareness of the VCL and more on reaching veterans before the point of crisis. VHA officials told us they have been trying to shift to a public health approach since 2016.
Some of the campaign themes and messages have reflected this shift; for example, the "Be There" campaign theme that was adopted in fiscal year 2016—and has remained the theme since—emphasizes the message that everyone has a role in helping veterans in crisis feel less alone and connecting them to resources. However, VHA officials told us in May 2018 that they were just beginning to conceptualize what the suicide prevention outreach campaign should look like moving forward. Leadership officials also said that while they were developing the national strategy, they delegated the responsibility for implementing the suicide prevention outreach campaign to other officials working on the campaign. The decline in VHA's suicide prevention media outreach activities over the past 2 fiscal years is inconsistent with VA's strategic goals, which identify suicide prevention as the agency's top clinical priority for fiscal years 2018 through 2024. Further, VHA has continued to obligate millions of dollars to its suicide prevention media outreach campaign each year. Since fiscal year 2017, VHA has obligated $24.6 million to the contract for media outreach related to both suicide prevention and mental health. Because VHA did not assign key leadership responsibilities or establish clear lines of reporting, its ability to oversee the suicide prevention media outreach activities was hindered, and these outreach activities decreased. As a result, VHA may not have exposed as many people in the community, such as veterans at risk for suicide, or their families and friends, to its suicide prevention outreach content. Additionally, without establishing responsibility and clear lines of reporting, VHA lacks assurance that it will have continuous oversight of its suicide prevention media outreach activities in the event of additional turnover and reorganization in the future, particularly as VHA begins implementing the suicide prevention media outreach campaign under its new agreement that begins in fiscal year 2019. VHA Monitors Metrics for Its Suicide Prevention Media Outreach Campaign, but Has Not Established Targets against Which to Evaluate the Campaign's Effectiveness VHA Monitors Metrics for Its Suicide Prevention Media Outreach Campaign VHA works with its contractor to create and monitor metrics to help gauge the effectiveness of its suicide prevention media outreach campaign in raising awareness among veterans and others about VHA services, such as the VCL. The metrics primarily focus on the number of individuals who were exposed to or interacted with VHA's suicide prevention content across various forms of outreach, including social media, PSAs, and websites. According to VHA, the metrics are intended to help VHA ensure that its media outreach activities achieve intended results, such as increasing awareness and use of the resources identified on the VCL website. Examples of metrics monitored by VHA and its contractor include those related to (1) social media, such as the number of times a piece of outreach content is displayed on social media; (2) PSAs, such as the total number of markets and television stations airing a PSA; and (3) the VCL website, such as the total traffic to the website, as well as the average amount of time spent on a page and average number of pages viewed per visit. VHA's contractor is required to monitor the metrics and report results on a monthly basis.
Specifically, the contractor provides monthly monitoring reports to VHA that summarize how outreach is performing, such as the number of visits to the VCL website that were driven by paid media sources. Officials noted these reports are key sources of information for VHA on the results of its outreach. VHA officials also told us they informally discuss certain metrics during weekly meetings with VHA's contractor. In addition, VHA works with its contractor to conduct a more in-depth analysis of outreach efforts during and after Suicide Prevention Month each year. VHA Lacks Metric Targets to Evaluate the Effectiveness of Its Suicide Prevention Media Outreach Campaign VHA has not established targets for the majority of the metrics it uses to help gauge the effectiveness of its suicide prevention media outreach campaign. As a result, VHA does not have the information it needs to fully evaluate the campaign's effectiveness in raising awareness of VHA's suicide prevention resources among veterans, including the VCL. For example, we found that VHA's contractor's monitoring reports—a summary of key metrics that VHA uses to routinely monitor information regarding the campaign—generally focused on outreach "highlights" and positive results. The reports did not set expectations based on past outreach or targets for new outreach, and lacked information on how outreach performed against expectations. For example: A monitoring report from 2018 showed that during one month, there were 21,000 social media mentions of keywords specific to VA suicide prevention, such as "VCL" or "veteran suicide," across social media platforms. These mentions earned 120 million impressions; however, there was no indication of the number of keyword mentions or impressions that VHA expected based on its media outreach activities. In addition, the report did not indicate the proportion of mentions that VHA believed were specifically driven by its outreach activities, and there also was no indication of whether these mentions were positive or negative, or what actions to take based on this information. Another monitoring report from January 2017 showed that paid advertising drove 39 percent of overall website traffic during one month, while unpaid sources drove the remaining 61 percent. However, there was no information indicating the amounts of paid advertising that VHA conducted during this monitoring period, and whether this amount of website traffic from paid advertising met expectations. VHA's 2016 Suicide Prevention Month summary report showed that there were 194,536 visits to the VCL website, roughly an 8 percent increase from the Suicide Prevention Month in 2015. However, the report did not indicate whether this increase from the prior year met expectations or whether a different result was expected. VHA officials told us that they have not established targets for most of the suicide prevention media outreach campaign's metrics because they lack meaningful targets to help evaluate the campaign. VHA officials said that the only target they have established is for each PSA to rank in the top 10 percent of the Nielsen ratings because this is the only meaningful target available that is accepted industry-wide. VHA officials stated that using any other targets would be arbitrary. For the remaining metrics, VHA officials told us they assess the outcomes of their campaign by comparing data from year to year, and examining any changes in the outcomes over time.
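To make the distinction concrete, the sketch below shows how the same year-to-year data officials described could instead be turned into a numeric target set in advance. The 2016 visit count comes from the summary report discussed above; the 2015 and 2017 counts and the growth-rate rule are assumptions for illustration, not VHA's data or method.

```python
# Hypothetical monthly visit counts to the VCL website during Suicide
# Prevention Month. The 2016 figure appears in the report above; the
# 2015 and 2017 values are assumed for illustration, not VHA data.
past_visits = {2015: 180_000, 2016: 194_536, 2017: 188_000}

years = sorted(past_visits)
# Year-over-year growth rates observed in the historical data.
growth_rates = [past_visits[b] / past_visits[a] - 1
                for a, b in zip(years, years[1:])]
avg_growth = sum(growth_rates) / len(growth_rates)

# One possible target: the latest result grown at the average historical rate.
target = past_visits[years[-1]] * (1 + avg_growth)
print(f"Average historical growth: {avg_growth:.1%}")
print(f"Illustrative target for next year: {target:,.0f} visits")

# Evaluation then becomes a simple comparison of actual results to the target.
actual = 201_000  # hypothetical observed result
print("Met target" if actual >= target else "Missed target")
```

The specific rule matters less than the principle: stating the expected result before the outreach runs, so that actual performance can be judged against it. A rolling average or a budget-adjusted baseline would serve the same purpose.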
However, VHA could set targets that capture the number of people who viewed or interacted with its outreach content, similar to its Nielsen target set for television viewership. Doing so would help VHA evaluate whether the campaign has been effective in raising awareness of VHA's suicide prevention resources. Further, creating targets for these additional metrics need not be arbitrary, because VHA could use information about how its metrics performed in the past to develop reasonable and meaningful targets for future performance. VHA could also adjust the targets over time to reflect changes in its metrics or approach to the campaign, such as changes to its paid media budget each year. Federal internal control standards for monitoring require agencies to assess the quality of their performance by evaluating the results of activities. Agencies can then use these evaluations to determine the effectiveness of their programs or the need for any corrective actions. Further, VA's June 2018 National Strategy for Preventing Veteran Suicide also emphasizes the importance of the agency evaluating the effectiveness of its outreach. The absence of established targets leaves VHA without a framework to effectively evaluate its campaign. Our prior work has shown that establishing targets allows agencies to track their progress toward specific goals. In particular, we have developed several key attributes of performance goals and measures, including, when appropriate, the development of quantifiable, numerical targets for performance goals and measures. Such targets can facilitate future evaluations of whether overall goals and objectives were achieved by allowing for comparisons between projected performance and actual results. Further, establishing targets for its outreach metrics will enable VHA officials to determine whether outreach performed as expected and raised awareness of VHA resources such as the VCL, including identifying outreach efforts that worked particularly well and those that did not. In doing so, VHA officials will have the opportunity to make better-informed decisions in their suicide prevention media outreach campaign to support VA's overall goal of reducing veteran suicides. Conclusions VA has stated that preventing veteran suicide is its top clinical priority; yet VHA's lack of leadership attention to its suicide prevention media outreach campaign in recent years has resulted in less outreach to veterans. While VHA identifies the campaign as its primary method of raising suicide prevention awareness, it has not established an effective oversight approach to ensure outreach continuity. This became particularly evident during a recent period of turnover and reorganization in the office responsible for the suicide prevention outreach campaign. Moving forward, VHA has an opportunity to improve its oversight to ensure that its outreach content reaches veterans and others in the community to raise awareness of VHA's suicide prevention services, particularly as VHA begins working with a new contractor beginning in fiscal year 2019. VHA is responsible for evaluating the effectiveness of its suicide prevention media outreach campaign in raising awareness about VHA services that are available to veterans who may be at risk for suicide.
To do so, VHA collects and monitors data on campaign metrics to help gauge the effectiveness of its suicide prevention media outreach campaign in raising such awareness, but has not established targets for the majority of these metrics because officials reported that there are no meaningful, industry-wide targets for them. We disagree with VHA's assertion that other targets would not be meaningful; VHA collects data on its metrics that it can use to develop reasonable and meaningful targets for future performance. In the absence of established targets, VHA cannot evaluate the effectiveness of the campaign or make informed decisions about which activities should be continued to support VA's overall goal of reducing veteran suicides. Recommendations for Executive Action We are making the following two recommendations to VA: 1. The Under Secretary for Health should establish an approach for overseeing its suicide prevention media outreach efforts that includes clear delineation of roles and responsibilities for those in leadership and contract oversight roles, including during periods of staff turnover or program changes. (Recommendation 1) 2. The Under Secretary for Health should require officials within the Office of Mental Health and Suicide Prevention to establish targets for the metrics the office uses to evaluate the effectiveness of its suicide prevention media outreach campaign. (Recommendation 2) Agency Comments and Our Evaluation We provided a draft of this report to VA for review and comment. In its written comments, summarized below and reprinted in Appendix I, VA concurred with our recommendations. VA described ongoing and planned actions and provided a timeline for addressing our recommendations. VA also provided technical comments, which we incorporated as appropriate. In response to our first recommendation, to establish an oversight approach that includes delineation of roles and responsibilities, VA acknowledged that organizational transitions and realignments within OMHSP contributed to unclear roles and responsibilities in 2017 and 2018. VA said that OMHSP has made organizational improvements, including hiring a permanent Director for Suicide Prevention and establishing a new organizational structure. In its comments, VA requested closure of the first recommendation based on these actions. However, to fully implement this recommendation, VA will need to provide evidence that it has established an oversight approach for the suicide prevention media outreach campaign. This would include providing information about the roles and responsibilities, as well as reporting requirements, for contract and leadership officials involved in the suicide prevention media outreach campaign under the new organizational structure and the new contract. VA will also need to demonstrate that it has a plan in place to ensure continued oversight of the suicide prevention media campaign in the event of staff turnover or program changes. In response to our second recommendation, to establish targets against which to evaluate suicide prevention metrics, VA said it has plans to work with communications experts to develop metrics, targets, and an evaluation strategy to improve its evaluation of its suicide prevention program efforts, including outreach. VA expects to complete these actions by April 2019. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date.
At that time, we will send copies to the appropriate congressional committees and the Secretary of Veterans Affairs. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or at DraperD@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. Appendix I: Comments from the Department of Veterans Affairs Appendix II: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Marcia A. Mann (Assistant Director), Kaitlin McConnell (Analyst-in-Charge), Kaitlin Asaly, and Jane Eyre made key contributions to this report. Also contributing were Jennie Apter, Emily Bippus, Valerie Caracelli, Lisa Gardner, Jacquelyn Hamilton, Teague Lyons, Vikki Porter, and Eden Savino.
Why GAO Did This Study Veterans suffer a disproportionately high rate of suicide compared with the civilian population. VA has estimated that an average of 20 veterans die by suicide per day, and in 2018, VA identified suicide prevention as its highest clinical priority. VHA's suicide prevention media outreach campaign—its collective suicide prevention outreach activities—helps raise awareness among veterans and others in the community about suicide prevention resources. VHA has contracted with an outside vendor to develop suicide prevention media outreach content. GAO was asked to examine VHA's suicide prevention media outreach campaign. This report examines the extent to which VHA (1) conducts activities for its suicide prevention media outreach campaign, and (2) evaluates the effectiveness of its campaign. GAO reviewed relevant VHA documents and data on the amount, type, and cost of suicide prevention outreach activities since fiscal year 2013. GAO also reviewed VHA's contract for developing suicide prevention outreach content and interviewed VA and VHA officials. What GAO Found The Department of Veterans Affairs' (VA) Veterans Health Administration (VHA) conducts national suicide prevention media outreach on various platforms to raise awareness about VHA's suicide prevention resources. The primary focus of this campaign since 2010 has been to raise awareness of the Veterans Crisis Line (VCL), VHA's national hotline established in 2007 to provide support to veterans in emotional crisis. GAO found that VHA's suicide prevention media outreach activities declined in recent years due to leadership turnover and reorganization. For example, the amount of suicide prevention content developed by VHA's contractor for social media decreased in fiscal year 2017 and the first 10 months of fiscal year 2018, after increasing in each of the 4 prior years. VHA officials reported not having leadership available for a period of time to make decisions about the suicide prevention media outreach campaign. GAO found that VHA did not assign key leadership responsibilities or establish clear lines of reporting, and as a result, its ability to oversee the outreach campaign was hindered. Consequently, VHA may not be maximizing its reach with suicide prevention media content to veterans, especially those who are at-risk. VHA evaluates the effectiveness of its suicide prevention media outreach campaign by collecting data on metrics, such as the number of people that visit the VCL website. However, VHA has not established targets for the majority of these metrics. Officials said they have not established targets because, apart from one industry-wide target they use, they lack meaningful targets for evaluating the campaign. However, VHA could use information about how its metrics performed in the past to develop reasonable and meaningful targets for future performance. Without established targets for its metrics, VHA is missing an opportunity to better evaluate the effectiveness of its suicide prevention media outreach campaign. What GAO Recommends VHA should (1) establish an approach to oversee its suicide prevention media outreach campaign that includes clear delineation of roles and responsibilities, and (2) establish targets for its metrics to improve evaluation efforts. VA concurred with GAO's recommendations and described steps it will take to implement them.
Background Safety defect vehicle recalls (auto recalls) are initiated when a defect in a vehicle or vehicle equipment creates an unreasonable safety risk, as determined by NHTSA or a manufacturer. After a recall is initiated, manufacturers are required to provide written notification to vehicle owners via First-Class Mail within 60 days and remedy the defect. Franchised dealers—which sell or lease an auto manufacturer's new vehicles—perform the recall remedy. Before manufacturers send recall notification letters to affected vehicle owners, NHTSA reviews draft letters and envelopes to ensure they include required information about the safety defect. Required information includes, among other things, a clear description of the safety defect, an evaluation of the risk to vehicle safety, and a statement that the manufacturer will remedy the defect without charge. See appendix II for an example of a notification letter. The number of vehicles affected by safety defect vehicle recalls has increased dramatically since 2011 (see fig. 1). The increase reflects, in part, several large-scale recalls. For example, in 2014, General Motors initiated a recall of over 8 million vehicles with faulty ignition switches. Similarly, according to NHTSA, in 2014 and 2015 some passenger vehicle manufacturers began recalling Takata air bag inflators; these recalls have grown to include approximately 34 million vehicles and 19 auto manufacturers. For the Takata recall, NHTSA issued various orders and established a Coordinated Remedy Program under which the agency oversees the supply of remedy parts and risk-based prioritization of vehicles for repair, and manages related recalls with the assistance of an Independent Monitor. The Independent Monitor assesses compliance with the applicable orders issued by NHTSA and makes recommendations aimed at enhancing the remedy program. According to NHTSA's Strategic Plan 2016–2020, this unprecedented recall activity encouraged the agency to improve its system for identifying and addressing defective vehicles. For example, the plan states that NHTSA's "vision is to achieve a 100-percent completion rate for every recall by improving communication at every level, at every step of the way." Thus, according to the plan, NHTSA and the auto industry have committed to identifying and implementing effective strategies to inform consumers of safety defects and envision that their coordination will bolster recall efforts to improve completion rates. NHTSA reported that annual completion rates for passenger vehicle recalls have remained relatively flat, ranging from 63 to 67 percent between calendar year 2011 and calendar year 2014. See appendix III for completion rates by vehicle component and vehicle type. In part to improve communication and encourage consumers to complete repairs, NHTSA and manufacturers provide auto recall information to the public on their websites. For example, certain motor vehicle manufacturers are required to allow consumers to search a vehicle's recall remedy status on the Internet using the vehicle identification number (VIN). NHTSA also provides publicly available auto recall information on its website, including examples of recall notification letters. In December 2016, NHTSA began consolidating its websites into NHTSA.gov to provide a single access point for its auto recall content.
One of these websites, safercar.gov, was once NHTSA's primary method of communicating auto recall information to consumers; however, the agency is in the process of moving this information to NHTSA.gov. NHTSA's Strategic Plan 2016–2020 states that the agency wants NHTSA.gov to be a comprehensive, user-friendly platform that serves as the premier source of vehicle safety information by, for example, improving the website's search capabilities. NHTSA also aims to encourage consumers to use its website's auto recall information through its communications program. NHTSA's Office of Communications and Consumer Information (OCCI) is the primary office responsible for implementing the agency's public communication efforts. OCCI intends to increase public engagement with the agency's information through its social media channels, such as Instagram, Twitter, and Facebook. The amount OCCI obligated to support the agency's auto recall efforts has increased from nearly $0.5 million in fiscal year 2011 to about $2.5 million in fiscal year 2016. According to NHTSA officials, these obligations supported various efforts, including public awareness campaigns, an auto recall hotline, advertising agencies, exhibits at auto shows, and NHTSA's mobile application. Auto Recall Information Use Varies, and Most Consumers in Focus Groups Preferred Electronic Recall Notifications in Addition to Mail Consumers in Our Focus Groups Primarily Considered Safety Risk and Convenience when Using Auto Recall Information to Make Repair Decisions As part of our focus group discussion sessions, consumers selected safety risk and convenience as the two most influential factors they considered when using auto recall information to decide whether to complete repairs. All factors considered: During each session, we first asked consumers to describe all the factors they considered. Across the sessions, consumers shared a wide variety of factors, including availability of a loaner vehicle, time to schedule and complete the repair, safety risk, and other factors. For example, some consumers had not yet repaired their vehicles because they were "just waiting" for parts to become available. Other consumers considered their previous customer service experiences at the franchised dealership or the distance they would need to travel to complete the repair. For example, one consumer at our rural focus group location told us it would take roughly 2 hours to reach the dealership's repair shop. Most influential factors considered: After the discussion of all factors, we then asked each consumer to select the single most influential factor they considered. Consumers in the sessions overwhelmingly selected safety risk and convenience as the two most influential factors (see table 1). More than half of consumers in our focus group discussion sessions selected safety risk as the most influential factor they considered when making repair decisions. They told us that their perception of the risk influenced whether or not they repaired their vehicle.
For instance, some consumers stated that they completed repairs immediately because the risks "sounded serious" or because they considered the defect a "safety concern." Conversely, some consumers said they did not complete the repairs because the defect "didn't sound very urgent." While each recall notification letter is required to include an evaluation of the risk to vehicle safety reasonably related to the defect, consumers in our focus group sessions shared mixed opinions about the quality and clarity of safety risk information included in the notification letter they received. For example, some consumers told us the letter's safety risk information seemed vague. For instance, one consumer told us the letter's description of the safety defect did not clearly state the chances of an increased risk of injury, and so he "had to figure out" the risk on his own. In addition, some consumers commented that the safety risk information could be more prominent in the notification letter, that the letter could emphasize the severity of the risk, or that the letter could describe the risk in simpler language. However, other consumers stated the notification letter they received adequately described the recall's safety risk. In June 2011, we recommended that NHTSA modify the requirements for defect notification letters to include additional information to capture readers' attention. In 2013, NHTSA responded to our recommendation by requiring manufacturers to include the statement "IMPORTANT SAFETY RECALL" at the top of auto recall notification letters.

Focus Group Participant's Comment: "I don't want to be without a car for half the day or stay with my kids all day."

Consumers in our focus group discussion sessions selected convenience as the second most influential factor they considered in making repair decisions. While some consumers described the "hassle" of the repair and being "too busy" to schedule and fix the defect, other consumers told us they repaired their vehicles more easily because, for example, they could take advantage of previously scheduled service appointments to also repair the defect. Also, some consumers in our sessions stated that the letter or notification they received could better address the inconvenience of the recall, for example by including better estimates of how long repairs might take. In addition, some consumers recommended the letter include options for scheduling needed repairs. As we discuss later in the report, NHTSA officials told us they continue to work with auto manufacturers to identify ways to encourage consumers to complete needed repairs, while representatives from some manufacturers we met with described specific steps they have taken to address some of the inconveniences consumers may experience in completing repairs. For example, one manufacturer facilitated a pilot program for a third-party service provider in conjunction with dealers to repair vehicles at the owner's home or place of work, while another manufacturer told us they work with individual dealers to hold events specifically for recall repairs, at which consumers can have repairs performed after normal business hours.

Industry Stakeholders' Use of Publicly Available Auto Recall Information Varies

Industry stakeholders' use of auto recall information varies because these stakeholders play different roles in the auto recall process. Auto manufacturers are primarily responsible for providing auto recall information to the public and others, including NHTSA and auto dealers.
Franchised dealers are responsible for performing the recall remedy for manufacturers and therefore use manufacturer-provided information for that purpose. Specifically, all of the franchised dealers we interviewed told us they identify recalls on new vehicles in their inventory primarily by accessing auto recall information through internal manufacturer databases. These franchised dealers may also use information from third-party providers or publicly available auto recall information on NHTSA's website to identify recalls affecting used vehicles. Independent dealers—which are not generally authorized by manufacturers to perform recall remedies—may use publicly available auto recall information to identify open recalls. Specifically, 2 of the 3 independent dealers we met with told us they use NHTSA's VIN look-up tool to search for open recalls affecting vehicles in their inventory before selling them to consumers. However, these dealers told us that the current design of the tool takes too much time to use because it requires users to search each VIN individually. For example, one dealer told us each search took about 15 seconds to perform, resulting in significant time and cost because the dealership has tens of thousands of vehicles in its inventory. These dealers told us that being able to search multiple VINs in a single search (i.e., VIN-batch search) could save them time or money. Representatives from the Alliance of Automobile Manufacturers stated they—in coordination with other industry stakeholders—are working with a third-party provider to develop a search tool that would address this concern by enabling VIN-batch searches for use by government agencies, such as state departments of motor vehicles, and commercial entities. The group anticipates the tool will be available in the first half of 2018.

Most Consumers in Our Focus Groups Prefer to Receive Recall Notification by Electronic Means in Addition to Mail

Although the vast majority of consumers who participated in our focus group discussion sessions reported a preference to receive auto recall notification by mail, most preferred to receive notifications by at least one additional electronic means, such as e-mail, phone calls, and text messages. Eighty of the 94 consumers in our sessions reported a preference for receiving notification by mail, and all but 4 reported actually receiving mailed notification (see fig. 2). However, 69 of the 94 consumers in our sessions also reported a preference for receiving recall notification by electronic means, but only 7 reported actually receiving at least one type of electronic notification. This result suggests a gap between industry recall notification practices and the notification preferences of most consumers in our focus groups, especially younger consumers, who were more likely to report a preference for notification by electronic means. For complete results of the questionnaire we administered to consumers for the discussion session, see appendix IV. As we discuss later in this report, in September 2016, NHTSA issued a Notice of Proposed Rulemaking (NPRM) that proposes to require auto manufacturers to notify consumers about auto recalls by electronic means in addition to First-Class Mail. NHTSA officials told us the agency is working with the administration on NHTSA's regulatory portfolio and priorities, including this rulemaking.
Some manufacturers told us they use additional methods to reach consumers, including notifying consumers by electronic means and translating recall information into Spanish. For example, representatives from one manufacturer told us they always notify consumers by e-mail before sending out the required First-Class Mail letter notification. These representatives told us using multiple recall notification means resulted in higher recall completion rates. In addition, eight of the remaining nine manufacturers told us they use supplemental notification by electronic means on a case-by-case basis—generally using additional means to improve recall completion rates—while four manufacturers stated they consider safety risk severity when deciding when or how to use additional notification means for individual recalls. Also, representatives from 3 of the 10 manufacturers we spoke with told us they translate the entire mailed notification letter into Spanish.

Most Consumers in Focus Groups Found the Auto Recall Areas of NHTSA.gov Generally Easy to Use, but Some Experienced Difficulties

Usability Testing with Consumers Found the Auto Recall Areas of NHTSA's Website Generally Easy to Use

In late 2016, NHTSA launched its redesigned NHTSA.gov website, including the auto recall areas consumers assessed during our testing sessions. According to responses to a questionnaire we administered during our testing sessions, 78 of the 94 consumers found the auto recall areas of NHTSA.gov either "somewhat" or "very easy" to use (see fig. 3). See appendix V for complete participant responses to the questionnaire we administered to each consumer. To inform the development of the redesigned website, NHTSA worked with a contractor to conduct a usability study in 2015 to evaluate users' reactions to the agency's websites, including NHTSA.gov. According to agency officials, NHTSA implemented several changes based on the findings from the usability study, including:

the creation of a dedicated "recalls" area of NHTSA.gov, and

the ability for users to access the VIN look-up tool in three different ways—on the homepage, in the "recalls" area, and through direct links either in a NHTSA e-mail for subscribers or from an external website.

In addition, NHTSA officials told us that Department of Transportation (DOT) and NHTSA staff meet as needed to discuss the website and consider improvements. For example, the officials said they monitor user searches for the relevance and accuracy of results and adjust the search software to better assist users in finding auto recall information. Officials also told us the agency collects a variety of other information about how visitors use NHTSA.gov, including how visitors access the website, and makes adjustments accordingly. For instance, NHTSA incorporated responsive web design as part of the agency's ongoing consolidation effort—meaning the site is optimized for viewing on desktop, tablet, and mobile devices. In addition to monitoring searches and how visitors access NHTSA.gov, NHTSA officials told us they collect and consider online survey data to make website improvements and use web-analytic software to monitor, for example, where visitors choose to exit the website. Officials stated that such monitoring activities have allowed NHTSA to identify and correct problems with NHTSA.gov. We did not directly evaluate the accessibility of the auto recall areas of NHTSA.gov to ensure the ability of people with physical or mental disabilities to use the website.
However, NHTSA officials provided us with an overview of several steps the agency takes to ensure NHTSA.gov complies with website accessibility requirements. For example, according to officials, NHTSA subscribes to a service that provides monthly accessibility scans of the agency's websites.

Consumers in Our Focus Groups Identified Opportunities to Improve the Usability of Certain Auto Recall Tasks on NHTSA's Website

While most consumers in our usability testing sessions generally found the auto recall areas of NHTSA's website easy to use, some consumers experienced difficulties completing tasks we asked them to perform (see table 2). Specifically, during each testing session we asked participants to perform tasks using the primary means NHTSA.gov provides for consumers to access information about auto recalls affecting their vehicles:

searching for auto recalls using their vehicle's VIN;

searching for auto recalls using their vehicle's year, make, and model; and

locating NHTSA's auto recall notification e-mail subscription service.

In addition, an evaluation we requested to corroborate the results of our consumer usability testing identified similar issues. As discussed below, consumers experienced these difficulties because the auto recall areas of NHTSA.gov do not always reflect federal and industry key website usability practices, which describe standards and guidelines for making websites easy to use. Following such practices can assist agencies in creating quality websites while providing the flexibility necessary to meet organizational needs. Website usability is particularly important for agencies, such as NHTSA, that are responsible for conveying safety information to the public. Federal standards for internal control state that agencies should communicate quality information externally and select appropriate methods for communicating with the public.

Recall Search Using VIN

While most consumers in our usability testing sessions found searching for recalls by VIN somewhat or very easy, some consumers found the search results did not provide the information they were seeking. When we asked consumers to perform VIN searches, they generally found the VIN look-up tool easy to use—88 of 94 consumers found searching with a VIN either somewhat or very easy. But some consumers experienced difficulties performing this task. Specifically, some consumers who had had their vehicles repaired expected to find the completed recall on the search results page. However, they were confused because the page is designed to display only open (i.e., unrepaired) recalls, not completed (i.e., repaired) recalls—leading these consumers to question the accuracy of the results. In addition, the evaluation conducted by website usability professionals found that, when an error occurred during a VIN search, the error message was too difficult to locate on the search results page. The evaluation recommended the error message have greater weight and more prominence on the page. Federal key website usability practices state that agencies should ensure that results of user searches provide the precise information being sought, and in a format matching users' expectations. When users are confused by search results, or do not immediately find what they are searching for, they become frustrated and may abandon the search or the website entirely.
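The recall status check the VIN look-up tool performs, and the VIN-batch variant the independent dealers described earlier, can be sketched in a few lines of code. The sketch below is illustrative only: the endpoint URL, the "openRecalls" response field, and the function names are hypothetical placeholders rather than a documented NHTSA interface, and a true batch service would accept many VINs in a single request rather than looping as dealers must today.

    import requests

    # Hypothetical endpoint standing in for a VIN-based recall lookup service;
    # NHTSA's actual tool is a web page, so this URL is a placeholder.
    RECALL_LOOKUP_URL = "https://recall-lookup.example.gov/recalls"

    def open_recalls_for_vin(vin):
        """Return the list of open (unremedied) recalls for a single VIN."""
        response = requests.get(RECALL_LOOKUP_URL, params={"vin": vin}, timeout=10)
        response.raise_for_status()
        # Assumed response shape: {"openRecalls": [{"campaign": "...", "component": "..."}]}
        return response.json().get("openRecalls", [])

    def open_recalls_for_inventory(vins):
        """Check an inventory one VIN at a time, as dealers must do today.

        A true VIN-batch service would replace this loop with one request
        carrying the entire list of VINs.
        """
        return {vin: open_recalls_for_vin(vin) for vin in vins}

    inventory = ["1HGBH41JXMN109186", "JH4KA7561PC008269"]  # sample VINs
    for vin, recalls in open_recalls_for_inventory(inventory).items():
        print(vin, "OPEN RECALL" if recalls else "no open recalls")

At the roughly 15 seconds per search one dealer reported, checking a 20,000-vehicle inventory one VIN at a time would take more than 80 hours, which is the time cost a batch search is meant to eliminate.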
Since NHTSA launched the VIN look-up tool in August 2014, the number of VIN searches performed has increased (see fig. 4). According to NHTSA officials, major increases occurred in mid-2015—when the Takata air bag inflator recalls were announced—and in early 2017, when NHTSA made the VIN look-up tool search function available on NHTSA.gov and displayed it prominently on the website. Ensuring the usability of NHTSA's VIN look-up tool is particularly important because it is the only way on NHTSA.gov for a consumer to determine whether their specific vehicle has an open safety recall.

Recall Search Using Vehicle Year, Make, and Model

Some consumers' vehicle year, make, and model searches were hampered by the information required to conduct an accurate search, as the content on the website is not always in plain language. We asked consumers to perform a recall search using their vehicles' year, make, and model, and 78 of 94 consumers found the task to be either somewhat or very easy. However, some consumers found that they did not know enough information about their specific vehicles to feel confident that they were searching for the correct vehicle. For example, a year, make, and model search for a 2009 Toyota Tacoma may ask the consumer to choose among vehicle options, including "2009 TOYOTA TACOMA REGULAR CAB W/SAB RWD/AWD." Acronyms such as "W/SAB"—which stands for "with side air bags"—may be confusing to consumers. Federal key website usability practices state that federal agencies should write website content using plain language, so website visitors can easily find and use what they need.

Focus Group Participant's Comment: "I think [the Recall Notification E-mail System Sign-Up is] poorly placed. I had to scroll to find it. I had to search for it. You want at the top."

Recall Notification E-mail System Sign-Up

Some consumers suggested improvements to make the Recall Notification E-mail System Sign-Up easier to locate on the homepage. NHTSA first made its Recall Notification E-mail System Sign-Up available in March 2008. Of the 94 consumers in our testing sessions, 66 found it either "somewhat" or "very easy" to find the Recall Notification E-mail System Sign-Up—making this the least easy of the three tasks we asked consumers to perform. Specifically, several consumers said the Recall Notification E-mail System Sign-Up should include a clearer description, be easier to find, and be located at the top of the homepage (see fig. 5). These improvements are particularly important because some consumers in our focus group sessions told us that the ability to sign up for auto recall e-mail notifications was the most useful part of the auto recall areas of NHTSA.gov. The website evaluation conducted by website usability professionals recommended that NHTSA streamline its homepage with more of a focus on primary website tasks. The evaluation also found that users must move through too many pages to sign up for recall e-mails. Federal key website design and usability practices state that agencies should put important items closer to the top of the page, where users can better locate the information. Key practices also state that agencies should design their websites so users can successfully complete the most common tasks in the fewest number of steps. The website usability difficulties that consumers in our focus groups experienced may be due to the fact that NHTSA has not studied the website's usability since the agency redesigned NHTSA.gov in late 2016 and, therefore, may have been unaware of these difficulties prior to our review.
NHTSA plans to conduct a website usability study with consumers after the consolidation effort, discussed above, is complete. However, NHTSA could not provide a general time frame for conducting the study because it has not yet determined when the consolidation effort will be complete. We have previously reported that it is essential for organizations to effectively guide their information technology efforts by establishing timelines to complete them, among other strategic planning best practices. Without establishing a completion date for its website consolidation effort, the website usability difficulties we identified may persist and limit the effectiveness of NHTSA's primary means of providing consumers with safety recall information about their vehicles on NHTSA.gov.

NHTSA Has Initiated Activities to Raise Consumer Awareness about Recalls, but It Is Too Early to Evaluate the Agency's Efforts

Public Awareness Campaign

In January 2016, NHTSA launched a national advertising campaign encouraging consumers to check for open recalls using the agency's VIN and year, make, and model look-up tools. Through March 2017, NHTSA spent about $1 million on its Safe Cars Save Lives campaign, which sponsors advertisements on Google, Facebook, and other media platforms. For example, Google might place NHTSA's advertisement above other search results when a consumer types certain keywords—such as "recall," "airbag recalls," or "safercar.gov"—into the search. NHTSA evaluated the campaign's effectiveness by monitoring website traffic performance reports to determine how frequently consumers clicked on NHTSA-sponsored advertisements and ultimately searched for open recalls using the agency's look-up tools. NHTSA also compared results across media platforms and adjusted the campaign's strategy to improve performance. For example, NHTSA optimized advertisements on mobile devices, since mobile-device users performed more recall searches than other users. According to NHTSA data, the awareness campaign resulted in consumers performing 1.1 million recall searches through March 2017—a cost of about $0.90 per search. Agency data indicate that this cost generally decreased as NHTSA improved the campaign's strategy. Agency officials told us NHTSA plans to spend another approximately $1.8 million on Safe Cars Save Lives from September 2017 through September 2018 due to the campaign's effectiveness in raising the public's awareness about auto recalls.
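The campaign figures NHTSA reported reduce to simple unit-cost arithmetic, which is how the agency compared media platforms. The sketch below reproduces the reported totals (about $1 million spent and 1.1 million resulting searches through March 2017); the per-platform split is invented purely for illustration.

    # Reported totals through March 2017, from the NHTSA data cited above.
    total_spend = 1_000_000        # dollars
    total_searches = 1_100_000     # recall searches attributed to the campaign

    # Prints $0.91; NHTSA reports this as about $0.90 per search.
    print(f"Overall cost per search: ${total_spend / total_searches:.2f}")

    # Hypothetical per-platform figures, illustrating how comparing unit costs
    # across platforms can guide where to shift advertising spending.
    platforms = {
        "search ads": {"spend": 600_000, "searches": 750_000},   # $0.80 per search
        "social ads": {"spend": 400_000, "searches": 350_000},   # $1.14 per search
    }
    for name, p in platforms.items():
        print(f"{name}: ${p['spend'] / p['searches']:.2f} per search")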
Pilot Program with States

NHTSA began implementing a mandated 2-year pilot grant program intended to evaluate the feasibility and effectiveness of informing consumers about open auto recalls during state vehicle registration. In September 2016, NHTSA solicited applications to participate in the program, wherein selected states would inform consumers—at no charge—about open recalls using all means that permit consumers to register vehicles within the state (e.g., in person, Internet, and mail). According to NHTSA, only one state applied for the grant. In September 2017, NHTSA awarded the sole applicant $223,000. Under the program, the grantee must collect and report program performance data, including the extent to which open recalls have been identified and repaired. In addition, the grantee must report whether certain notification means were more effective than others and what could be done to improve the program. Upon completion of the pilot program, NHTSA is required to evaluate the extent to which open recalls identified have been remedied. Auto manufacturers we met with were generally supportive of the program. Specifically, representatives from 9 of the 10 manufacturers told us that notifying consumers about open recalls during vehicle registration can raise consumer awareness or improve recall completion rates.

Proposed Rulemaking

In September 2016, NHTSA issued a Notice of Proposed Rulemaking (NPRM), which proposes to require auto manufacturers to notify consumers about open recalls by electronic means—such as e-mails, phone calls, and text messages—in addition to First-Class Mail. As we described earlier, auto manufacturers are currently required to notify consumers about safety recalls affecting their vehicles via First-Class Mail. According to NHTSA, the NPRM aims to aid in efficiently and effectively improving recall completion rates by proposing that manufacturers provide notification using electronic means in addition to First-Class Mail. Consumers in our focus groups as well as auto manufacturers and consumer associations we interviewed generally supported additional notification using electronic means.

Consumers in our focus groups: As we discussed earlier, 69 of the consumers in our focus group discussion sessions reported they would prefer to receive additional notification by at least one type of electronic means. However, only 7 consumers actually received such notifications—suggesting a gap between industry notification practices and notification preferences for these consumers.

Auto manufacturers: Representatives from 9 of 10 manufacturers we interviewed told us they generally support providing notification using electronic means. Although the NPRM proposes a broad definition of electronic means to give manufacturers flexibility to determine the most effective means, these representatives also shared implementation concerns. For example, representatives from 1 of the 9 manufacturers told us that—although the company collects e-mail addresses from some customers for other purposes—not all customers provide e-mail addresses, and those collected are not always accurate. As we discussed previously, most manufacturers we met with currently use supplemental notification by electronic means on a case-by-case basis.

Consumer associations: Similarly, both consumer associations we interviewed told us additional electronic notification can help reach consumers who do not complete repairs after receiving initial mailed notification.

NHTSA's proposal would maintain manufacturer reporting requirements, though it may result in additional reporting. This additional information could help the agency evaluate the effectiveness of various means of consumer notification. We previously found that NHTSA may be able to use manufacturers' data to identify what factors make some recalls more or less successful than others. We recommended that NHTSA use the recall data it collects to analyze particular patterns or trends that may characterize successful recalls and determine whether these factors represent best practices. If the NPRM is finalized, manufacturers would provide NHTSA with representative copies of the newly required electronic notifications, in addition to mailed notifications, and would specify the electronic means used, such as e-mail or text message. According to NHTSA officials, this information could allow the agency to track and evaluate the effectiveness of various notification means used by manufacturers by, for example, comparing completion rates across means—a key step in identifying best practices that could encourage consumers to complete repairs.
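If a final rule required manufacturers to report the notification means used for each recall, the comparison officials described could be as simple as grouping recalls by means and computing volume-weighted completion rates. A minimal sketch follows, using invented data; the record layout and figures are assumptions, not actual NHTSA reporting.

    from collections import defaultdict

    # Hypothetical recall records: (notification means, vehicles affected, vehicles remedied).
    recalls = [
        ("mail only",     200_000, 120_000),
        ("mail + e-mail", 150_000, 105_000),
        ("mail + text",    50_000,  38_000),
        ("mail only",      80_000,  46_000),
    ]

    totals = defaultdict(lambda: [0, 0])  # means -> [affected, remedied]
    for means, affected, remedied in recalls:
        totals[means][0] += affected
        totals[means][1] += remedied

    # Volume-weighted completion rate for each notification means.
    for means, (affected, remedied) in totals.items():
        print(f"{means}: {remedied / affected:.0%} completion")  # e.g., "mail only: 59% completion"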
However, it is too early for NHTSA to conduct such an evaluation, because the agency has not issued a final rule. NHTSA officials told us the agency is working with the administration on NHTSA's regulatory portfolio and priorities, including this rulemaking.

Collaboration with Stakeholders

NHTSA has also taken steps to collaborate with industry stakeholders and explore consumer education best practices. For example, in April 2015 NHTSA hosted a day-long workshop that brought together auto industry stakeholders to examine public education of the recall process. During the workshop, participants identified current barriers to the public's awareness of auto recalls and discussed potential solutions to address them, such as using text messages and social media to communicate with younger consumers and using different delivery methods for recall notices. Similarly, in January 2016 NHTSA and 18 auto manufacturers adopted a set of Proactive Safety Principles to explore and employ new ways to increase safety recall participation rates. For example, NHTSA and auto manufacturers agreed to share industry best practices and policies based on lessons learned from ongoing safety recalls. The Independent Monitor of Takata, in conjunction with NHTSA, has also issued a set of coordinated communications recommendations based on consumer research, best practices observed during the Takata recall, and discussions with manufacturers. For example, the recommendations encourage manufacturers to:

pursue a "multi-touch" communications strategy that employs non-traditional means, such as e-mail and text messages;

convey risk in clear, accurate, and urgent terms; and

include a clear "call to action" designed to facilitate prompt and efficient scheduling of repairs.

According to NHTSA officials, the agency relies on auto manufacturers to evaluate the effectiveness of these efforts. However, agency officials told us NHTSA reviews manufacturers' communication plans as part of the Takata recall's Coordinated Remedy Program and provides ongoing recommendations on manufacturers' communication language, approach, and strategies.

Conclusions

With the recent steep increase in safety defect vehicle recalls and continued low recall completion rates, it is vital for consumers to be able to easily access and use publicly available auto recall information. NHTSA has taken important steps to improve its website—which provides safety recall information to consumers—resulting in most consumers in our focus groups finding the website easy to use. However, the difficulties some experienced in attempting to complete essential auto recall tasks demonstrated that NHTSA.gov does not always reflect key website usability practices for website design. Although NHTSA plans to conduct a website usability study with consumers after consolidating its websites, it has not determined a completion date for this effort—an essential step for organizations to effectively guide their information technology efforts. Without such a date, the website usability difficulties may persist and limit the effectiveness of NHTSA.gov in providing consumers with recall information about their vehicles.
By addressing these difficulties in the interim, NHTSA can better assure that consumers obtain this information, which can be vital to their safety.

Recommendations for Executive Action

We are making the following two recommendations to NHTSA:

The Administrator of NHTSA should determine a completion date for the agency's website consolidation effort. (Recommendation 1)

The Administrator of NHTSA should, while the agency continues its website consolidation effort, take interim steps to improve the usability of the auto recall areas of NHTSA.gov by addressing the website usability difficulties we identified. (Recommendation 2)

Agency Comments

We provided a draft of this report to DOT for review and comment. In its written comments, reproduced in appendix VI, DOT stated that it concurred with our recommendations. The department also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to relevant congressional committees, the Secretary of Transportation, and the Administrator of NHTSA. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or flemings@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VII.

Appendix I: Objectives, Scope, and Methodology

This report examines the use of publicly available auto recall information for safety defects affecting passenger vehicles. The report addresses the following objectives: (1) How do consumers and industry stakeholders use publicly available auto recall information? (2) How easy or difficult do consumers find the auto recall areas of NHTSA.gov to use? (3) What steps, if any, has the National Highway Traffic Safety Administration (NHTSA) taken to raise consumer awareness about auto recalls and how has NHTSA evaluated the effectiveness of these steps? We define publicly available auto recall information to include information on the auto recall areas of NHTSA.gov, such as examples of notification letters that manufacturers mail to consumers. This report focuses on safety defect vehicle recalls affecting passenger vehicles that are initiated when a defect in a vehicle or vehicle equipment creates an unreasonable safety risk, as determined by NHTSA or a manufacturer. To determine how consumers use publicly available auto recall information, we analyzed transcripts and questionnaires from 12 consumer focus groups we conducted with used and new vehicle owners who had experienced an auto recall in the last 24 months. Each focus group was split into two sessions: (1) a discussion session to explore participants' thoughts, experiences, and preferences about auto recall information and (2) a website usability testing session. Also, we administered a questionnaire as part of each of these sessions. For the discussion session, we asked consumers about the recall notification process and how they used the recall information, and for the website usability testing session, we asked consumers to fill in a questionnaire during the session itself as they assessed the usability of the auto recall areas of NHTSA's website. We conducted the 12 focus groups at six locations across the country, with each group including 7 or 8 consumers for a total of 94 participants.
Half of the focus groups were composed of consumers who had completed the repair, and the remaining half included consumers who had not completed the repair. We selected the six focus group locations to provide population and geographic dispersion. To ensure geographic dispersion, we selected at least one location in each U.S. Census region (see table 3). To ensure population dispersion, we selected Metropolitan Statistical Areas representing a range of population sizes based on 2015 U.S. Census estimates. To ensure our selection included the perspectives of vehicle owners in geographically distant or isolated communities, we also selected a rural location, which we defined as a city or town that has a population of less than 50,000 inhabitants and is not an urbanized area contiguous and adjacent to a city or town that has a population of greater than 50,000 inhabitants. Using information provided by the participants, we selected focus group participants based on age, income, gender, education level, race, and ethnicity to ensure we collected a range of perspectives on auto recall information use. However, since we did not select a representative sample of participants, focus group results are not generalizable to all vehicle owners. During focus group discussion sessions, we asked participants to discuss factors they considered when deciding whether to repair their recalled vehicle and then to select the single most influential factor. Each of the 12 focus group sessions was audio-recorded and transcriptions were created; transcripts served as the record for each group. We then evaluated those transcripts using systematic content analysis to identify the factors consumers considered when deciding whether to complete repairs and any suggested improvements to the auto recall communication process. The analysis was conducted in three steps. First, two analysts independently developed a code framework and then worked together to resolve any discrepancies. Second, each transcript was coded independently by analysts using the framework, and any discrepancies were resolved by both analysts agreeing on the coding of the associated statement by a participant. Third, if needed, another analyst adjudicated any continued disagreement between coders. Because the transcripts did not include a unique identifier for each focus group participant, we conducted our analysis of focus group session discussions at the group level (i.e., of the 12 focus groups we conducted). We also administered and analyzed a questionnaire as part of each discussion session to quantify responses regarding consumers' use of auto recall information, including how they received and preferred to receive auto recall notifications. Our analysis of the questionnaire responses was conducted at the individual consumer level (i.e., of the 94 consumers who participated). These focus group sessions were structured, guided by a moderator who used a standardized list of questions to encourage participants to share their thoughts, experiences, and preferences. We also conducted two pretest focus groups at our headquarters and made some revisions to the focus group guide prior to beginning the sessions with consumers. Methodologically, focus groups are not designed to demonstrate the extent of a problem, to generalize results to a larger population, or to provide statistically representative samples or reliable quantitative estimates.
Instead, they are intended to generate in-depth information about the reasons for the focus group participants’ thoughts, experiences, and preferences on specific topics. The projectability of the information produced by our focus group sessions is limited. For example, the information includes only the responses from the vehicle owners from the 12 selected groups and their individual responses to questions we asked. The experiences and preferences expressed may not reflect other vehicle owners’ thoughts and preferences. In addition, while the composition of the groups was designed to ensure a range of age and education levels, among the other criteria mentioned previously, the groups were not constructed using a random sampling method. To determine how industry stakeholders use auto recall information, we interviewed selected auto manufacturers, selected franchised and independent auto dealerships, NHTSA program officials, and other industry stakeholders. Specifically, we interviewed representatives from the following 10 auto manufacturers, selected based on each manufacturer’s sales market share (small, medium, and large), place of ownership (foreign and domestic), and experience with auto recalls (lower to higher based on the average annual number of auto recall campaigns and average market share of each manufacturer from 2010 to 2014) to collect a range of perspectives on how manufacturers use auto recall information: Tesla Motors, Inc. To understand the perspective of auto dealers, we interviewed four franchised dealerships, one in each of the four U.S. Census regions where we conducted focus groups with consumers. We also interviewed three independent auto dealerships in two U.S. Census regions. The results of these interviews are not generalizable to all auto manufacturers and dealerships, but provide insights about how some industry stakeholders use auto recall information. We conducted interviews with NHTSA program officials to understand NHTSA’s role in the auto recall process. In addition, we interviewed other stakeholders, including the Independent Monitor of Takata, which assists NHTSA in overseeing the Takata recall, as well as officials from consumer associations and other industry groups (see table 4). To evaluate how easy or difficult consumers find the auto recall areas of NHTSA.gov to use, we reviewed various website usability resources to understand federal and industry key website usability practices for making websites easy to use, such as focusing on design and how easily users can find information. In addition, we reviewed federal standards for internal control related to communicating quality information externally. During our usability testing sessions, we asked consumers to attempt to complete auto recall tasks—the primary means NHTSA.gov provides for consumers to access information about auto recalls affecting their vehicles—and discuss their experiences. We then compared consumers’ experiences with the usability of the website against these practices. To identify key website usability practices, we analyzed guidance documents from NHTSA and other federal agencies. For example, we analyzed the General Services Administration’s (GSA) and the Department of Health and Human Services’ Research-Based Web Design & Usability Guidelines, which includes quantified, peer-reviewed guidelines intended to help federal agencies improve the design and usability of their information-based websites. 
We also analyzed GSA’s Requirements for Federal Websites and Digital Services, and the U.S. Digital Services Playbook to identify key practices for making websites easy to use. Identified key practices are: (1) design and content— focusing on the layout, headers, and design; (2) navigation—how easily users can find information; (3) clarity—the ability to read and digest content; (4) identity and purpose—whether the site clearly presents its purpose; and (5) accessibility—the ability of people with physical or mental disabilities to use the site. To analyze the results of focus group website testing sessions, we performed a systematic content analysis of the session transcripts using the same content analysis methods described above and an analysis of the questionnaire we administered to each participant during the website usability sessions. Specifically, we analyzed the transcripts from the website usability testing sessions to account for consumers’ experiences, including their initial impressions of the website and any suggested usability improvements. We also analyzed the results of the questionnaire that each participant completed where participants were asked to mark responses regarding their experience including an assessment of the usability of the auto recall areas of NHTSA.gov. Our analysis of the results from the questionnaire responses was conducted at the individual consumer level (i.e., of the 94 consumers who participated) while our analysis of focus group session discussions was conducted at the group level (i.e., of the 12 focus groups we conducted). To corroborate the results of usability testing sessions we conducted with the consumers in our focus groups, we requested that five website usability professionals from GSA’s Federal User Experience Community conduct an independent evaluation of the auto recall areas of NHTSA.gov against federal and industry key website usability practices (described above). The website usability professionals developed a website usability evaluation form, which they used to individually evaluate the auto recall areas of NHTSA’s website. The website usability professionals then met to form a consensus and provided us with one final group evaluation of the website usability of the auto recall areas of NHTSA.gov. Also, although neither our usability testing nor the website usability evaluation conducted by website usability professionals directly addressed accessibility, we interviewed responsible agency officials about how the agency assesses the accessibility of NHTSA.gov. We also requested and analyzed website data provided by NHTSA to understand how consumers access and use NHTSA.gov. Requested data included the number of subscribers to NHTSA’s Recall Notification E-mail System Sign-up; the number of weekly vehicle identification number (VIN) searches performed on NHTSA.gov from August 2014 through May 2017; and NHTSA.gov usage data by device (i.e., usage by mobile devices, tablets, and desktop computers). We assessed the reliability of these data by reviewing any supporting documents provided by the agency and interviewing responsible NHTSA officials, and concluded the data were sufficiently reliable for our reporting purposes. While we did not independently review the usability of auto manufacturers’ auto recall websites, we requested and reviewed the results of any audits that NHTSA performed of these websites, including whether the websites met statutory and regulatory requirements for providing auto recall information to the public. 
We then corroborated any audit findings by reviewing the auto recall websites of the selected auto manufacturers that we interviewed. To determine any steps NHTSA has taken to raise consumer awareness about auto recalls and how NHTSA evaluates the effectiveness of any steps, we reviewed relevant statutes, regulations, and proposed rules, including the Fixing America's Surface Transportation Act and a Notice of Proposed Rulemaking related to recall notification methods. We also reviewed agency and other documents that describe or evaluate NHTSA's public awareness activities. For example, we analyzed NHTSA's strategic planning documents—such as NHTSA's Strategic Plan 2016–2020—to identify ongoing public awareness activities along with their related goals, objectives, or performance metrics. Similarly, we requested and analyzed any documents NHTSA uses to evaluate the effectiveness of its public awareness activities, including performance reports for NHTSA's ongoing Safe Cars Save Lives campaign. To assess the reliability of data included in these performance reports—such as VIN searches performed—we reviewed agency documentation and interviewed agency officials about the reliability, accuracy, and completeness of the data and determined the data were sufficiently reliable for our reporting purposes. We reviewed performance management practices as provided in the Government Performance and Results Act of 1993 (GPRA), the GPRA Modernization Act of 2010, and standards for internal control in the federal government to identify any opportunities for improvement. We also performed a literature review to identify any related published articles and research studies. To understand how NHTSA implements and evaluates any public awareness activities, we also interviewed responsible agency officials from NHTSA's Office of Communications and Consumer Information and other offices. In addition, we discussed NHTSA's public awareness efforts during interviews with industry stakeholders, including selected auto manufacturers, selected franchised and independent auto dealerships, and other industry stakeholders. We analyzed the results of these interviews along with the focus group discussions we conducted with consumers (discussed above) to identify perspectives on the effectiveness of NHTSA's public awareness steps. We conducted this performance audit from October 2016 to December 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Example of an Auto Manufacturer's Recall Notification Letter

Appendix III: Annual Recall Completion Rates by Vehicle Component and Vehicle Type

The National Highway Traffic Safety Administration (NHTSA) is required to conduct a biennial analysis of vehicle safety recall completion rates and submit the results of its analysis in a report to certain congressional committees. The report must include, among other things, the annual recall completion rate by vehicle type and vehicle component (such as brakes, fuel systems, and air bags) for each of the 5 years preceding the year the report is submitted.
According to NHTSA’s May 2017 report, completion rates for all vehicles combined ranged between 63 percent and 67 percent between calendar year 2011 and calendar year 2014 (see table 5). However, NHTSA reported wider variation when the recall completion rates are broken down by vehicle type. Similarly, the report found that completion rates for most component categories fall within a range of 60 percent to 75 percent (see table 6). The annual completion rate is a volume-based, weighted metric, such that the more vehicles affected by the recall, the more weight or influence it has on the computed rate. Appendix IV: Focus Group Participants’ Responses to Recall Notification Questionnaire Focus group participants responded to a questionnaire we administered to collect information on consumers’ auto recall notification preferences during our discussion sessions. Table 7 shows participants’ responses to the administered questionnaire, by age group. We present these responses by age group, because consumers’ notification preferences may vary according to their ages. Appendix V: Focus Group Participants’ Responses to Website Usability Questionnaire Focus group participants responded to a questionnaire we administered to collect information on the usability of NHTSA.gov during our usability testing sessions. Table 8 shows focus group participants’ responses to the administered questionnaire, by age group. We present these responses by age group, because consumers’ website usability needs or preferences may vary according to their ages. Appendix VI: Comments from the Department of Transportation Appendix VII: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the individual named above, H. Brandon Haller (Assistant Director); Katherine Blair; Jason Blake; Melissa Bodeau; Alicia Cackley; William Colwell (Analyst in Charge); Lacey Coppage; Elizabeth Dretsch; Jaci Evans; Marcia Fernandez; Sarah Kaczmarek; Malika Rice; Todd Schartung; and Andrew Stavisky made key contributions to this report.
Appendix IV: Focus Group Participants' Responses to Recall Notification Questionnaire

Focus group participants responded to a questionnaire we administered to collect information on consumers' auto recall notification preferences during our discussion sessions. Table 7 shows participants' responses to the administered questionnaire, by age group. We present these responses by age group because consumers' notification preferences may vary according to their ages.

Appendix V: Focus Group Participants' Responses to Website Usability Questionnaire

Focus group participants responded to a questionnaire we administered to collect information on the usability of NHTSA.gov during our usability testing sessions. Table 8 shows focus group participants' responses to the administered questionnaire, by age group. We present these responses by age group because consumers' website usability needs or preferences may vary according to their ages.

Appendix VI: Comments from the Department of Transportation

Appendix VII: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the individual named above, H. Brandon Haller (Assistant Director); Katherine Blair; Jason Blake; Melissa Bodeau; Alicia Cackley; William Colwell (Analyst in Charge); Lacey Coppage; Elizabeth Dretsch; Jaci Evans; Marcia Fernandez; Sarah Kaczmarek; Malika Rice; Todd Schartung; and Andrew Stavisky made key contributions to this report.

Why GAO Did This Study

The number of vehicles affected by safety defect recalls increased sharply in recent years—from nearly 13 million in 2011 to over 51 million in 2016. Once a defect is identified, auto manufacturers are required to send written notification to vehicle owners by mail. NHTSA also aims to enhance awareness of auto recalls by providing information on its website, NHTSA.gov. The Fixing America's Surface Transportation Act includes a provision requiring GAO to study the use of publicly available safety recall information. This report addresses: (1) how consumers and industry stakeholders use such information and (2) how easy consumers find the auto recall areas of NHTSA.gov to use, among other objectives. To understand consumers' use of auto recall information and to test website usability, GAO conducted 12 focus groups with 94 consumers who had a recall. Focus groups were held in six locations selected for population and geographic variation. GAO identified key website usability practices and requested an evaluation by website usability professionals. GAO reviewed statutes, regulations, and NHTSA documents, and interviewed industry stakeholders—including 10 manufacturers selected based on sales market share and other factors.

What GAO Found

Consumers, manufacturers, and auto dealers use publicly available auto recall information differently. For example, the 94 consumers in 12 focus groups that GAO conducted used this information to decide whether to repair their vehicles. These consumers overwhelmingly cited safety risk and convenience as the two most influential factors they considered. Most consumers reported a preference for receiving recall notification by at least one electronic means, such as by e-mail or text message, in addition to mail. However, only 7 of 94 consumers reported receiving electronic notifications, suggesting a gap between the industry's auto recall notification practices and consumers' preferences. (See fig.) In response to a mandate in law, in September 2016, the National Highway Traffic Safety Administration (NHTSA) issued a proposed rule that, if finalized, would require manufacturers to notify consumers about auto recalls by electronic means in addition to mail. Most consumers in GAO's focus group website usability tests found the auto recall areas of NHTSA's website—NHTSA.gov—easy to use; however, some consumers experienced difficulties when asked to complete auto recall-related tasks. For example, when consumers attempted to search for recalls affecting their specific vehicles, some found the search results confusing, leading them to question the accuracy of the results. Similarly, some consumers were hampered in searching for recalls by their vehicles' year, make, and model because the website did not always display model options using plain language. GAO found that the auto recall areas of NHTSA.gov do not always reflect federal and industry key website usability practices, and that an independent evaluation conducted by website usability professionals at GAO's request identified similar issues. NHTSA is in the process of consolidating its websites and plans to conduct a website usability study of NHTSA.gov with consumers after the consolidation is complete. However, the agency has not determined a completion date for the consolidation effort—an essential step for organizations to effectively guide their information technology efforts.
Without establishing a completion date and taking interim steps to improve the usability of NHTSA.gov, consumers will likely continue to experience difficulties, which may limit the effectiveness of the website as NHTSA's primary means of providing consumers with information about recalls affecting their vehicles.

What GAO Recommends

GAO recommends that NHTSA determine a completion date for its website consolidation effort and take interim steps to improve the usability of NHTSA.gov by addressing the website usability difficulties GAO identified. The Department of Transportation concurred with the recommendations.
Background

Coal accounted for 17 percent of energy production (30 percent of electricity production) in the United States in 2016. To generate this energy, approximately 730 million tons of coal were mined domestically in 2016, according to the U.S. Energy Information Administration, of which approximately 40 percent was produced on federal lands. As of 2016, state regulatory authorities and OSMRE had received financial assurances associated with coal mines that had been permitted to disturb approximately 2.3 million acres, according to OSMRE data.

Coal is mined in two different ways: surface mining and underground mining. In surface coal mining, before the underlying coal can be extracted, the land is cleared of forests and other vegetation and topsoil is removed and stored for later use. Explosives or other techniques are then used to break up the overlying solid rock, creating dislodged earth, rock, and other materials known as spoil. Surface coal mines can cover an area of many square miles. In underground coal mining, tunnels are dug to access coal that is too deep for surface mining methods. In some cases, underground coal mines are designed to leave sufficient coal in the mine to support the overlying surface, and in other cases, they are designed to extract higher quantities of coal, which results in subsidence of the overlying surface as mining progresses. In addition to disturbing the land surface, coal mining can affect water quality, according to the Environmental Protection Agency, the National Academies, and others. For example, mining can increase sediments in rivers or streams, which may negatively affect aquatic species. Moreover, mining can expose minerals and heavy metals to air and water, leading to a condition known as acid mine drainage, which can lead to long-term water pollution and harm some fish and wildlife species. Mining can also lower the water table or change surface drainage patterns.

Regulation of Coal Mining

The surface effects of coal mining in the United States are regulated under SMCRA, which also created OSMRE to administer the act. SMCRA allows an individual state or Indian tribe to develop its own program to implement the act if the Secretary of the Interior finds that the program is in accordance with federal law. A state with an approved program is said to have "primacy" for that program. To obtain primacy, a state or Indian tribe submits to the Secretary of the Interior for approval a program that demonstrates that the state or tribe has the capability of carrying out the requirements of SMCRA. The program must demonstrate that the state or Indian tribe has, among other things, a law that provides for the regulation of the surface effects of coal mining and reclamation in accordance with the requirements of SMCRA, and a regulatory authority with sufficient personnel and funding to do so. Of the 25 states and four Indian tribes that OSMRE identified as having active coal mining in 2017, 23 states had primacy, and OSMRE manages the coal program in two states and for the four Indian tribes. SMCRA requires a mine operator to obtain a permit before starting to mine. The permit process requires operators to submit plans describing the extent of proposed mining operations and how and on what timeline the mine sites will be reclaimed.
In general, an operator must reclaim the land to a use it was capable of supporting before mining or to an alternative postmining land use that OSMRE or the state regulatory authority deems higher or better than the premining land use. In reclaiming the mine site, operators must comply with regulatory standards that govern, among other things, how the reclaimed area is regraded, replanting of the site, and the quality of water flowing from the site. Specifically:

Operators are generally required to return mine sites to their approximate original contour unless the operator receives a variance from the regulatory authority. To return to this contour, the surface configuration achieved by backfilling and grading of the mined area must closely resemble the general surface configuration of the land before mining and blend into and complement the drainage pattern of the surrounding terrain, with all highwalls and spoil piles eliminated.

Operators are required to demonstrate successful revegetation of the mine site for 5 years (in locations that receive more than 26 inches of rain annually) or 10 years (in drier areas). States have requirements for what vegetation may be planted depending on the approved postmining land use. For example, West Virginia's regulations call for sites with a postmining land use of forest land to be planted with at least 500 woody plants per acre. The state specifies that at least five species of trees be used, with at least three of the species being higher-value hardwoods, such as oak, ash, or maple.

SMCRA requires that financial assurances be sufficient to ensure reclamation compliant with water quality standards, including those established by the Environmental Protection Agency or the states under the Clean Water Act. SMCRA's implementing regulations also contain additional water protection requirements. For example, the regulations require that all surface mining and reclamation activities be conducted to minimize disturbance of the hydrologic balance within the permit and adjacent areas and to prevent material damage to the hydrologic balance outside the permit area.

The federal government also enacted SMCRA, in part, to implement an abandoned mine land program to promote the reclamation of mined areas left without adequate reclamation prior to 1977, when SMCRA was enacted, and that continue to substantially degrade the quality of the environment, prevent or damage the beneficial use of land or water resources, or endanger the health or safety of the public. Specifically, Congress found that a substantial number of acres of land throughout the United States had been disturbed by surface and underground coal mining on which little or no reclamation was conducted. Further, it found that the impacts from these unreclaimed lands imposed social and economic costs on residents in nearby areas as well as impaired environmental quality. Since the abandoned mine land program was created, approximately $3.9 billion has been spent to reclaim abandoned mine lands, and there is at least $10.2 billion in remaining reclamation costs for coal mines abandoned prior to 1977, as of September 30, 2017, according to OSMRE.

Financial Assurances for Reclamation

SMCRA generally requires operators to submit a financial assurance in an amount sufficient to ensure that adequate funds will be available for OSMRE or the state regulatory authority to complete the reclamation if the operator does not do so.
The amount of financial assurance required is determined by the regulatory authority—OSMRE or the state—and is based on its calculation of the estimated cost to complete the reclamation plan it approved as part of the mining permit. Financial assurance amounts can be adjusted as the size of the permit area or the projected cost of reclamation changes.

SMCRA also authorizes states to enact an OSMRE-approved alternative bonding system as long as the alternative achieves the same objectives. One kind of alternative bonding system is known as a bond pool. Under this type of system, the operator may post a financial assurance for an amount determined by multiplying the number of acres in the permit area by a per-acre assessment. The per-acre assessment may vary depending on the site-specific characteristics of the planned mining operation and the operator's history of compliance with state regulations. However, the per-acre bond amount may be less than the estimated cost of reclamation. To supplement the per-acre bond, the operator generally must pay a fee for each ton of mined coal and may also be required to pay other types of fees. These funds are pooled and can be used to reclaim sites that participants in the alternative bonding system do not reclaim. Under OSMRE regulations, all alternative bonding systems must provide a substantial economic incentive for the operator to comply with reclamation requirements and must ensure that the regulatory authority has adequate resources to complete the reclamation plan for any sites that may be in default at any time.

OSMRE regulations implementing SMCRA recognize three major types of financial assurances: surety bonds, collateral bonds, and self-bonds. A surety bond is a bond in which the operator pays a surety company to guarantee the operator's obligation to reclaim the mine site. If the operator does not reclaim the site, the surety company must pay the bond amount to the regulatory authority, or the regulatory authority may allow the surety company to perform the reclamation instead of paying the bond amount. Collateral bonds include cash; certificates of deposit; liens on real estate; letters of credit; federal, state, or municipal bonds; and investment-grade rated securities deposited directly with the regulatory authority. A self-bond is a bond in which the operator promises to pay reclamation costs itself. Self-bonds are available only to operators with a history of financial solvency and continuous operation. To remain qualified for self-bonding, operators must, among other requirements, do one of the following: have an "A" or higher bond rating, maintain a net worth of at least $10 million, or possess fixed assets in the United States of at least $20 million. In addition, the total amount of self-bonds any single operator can provide may not exceed 25 percent of its tangible net worth in the United States. Primacy states have discretion over whether to accept self-bonds.

State Regulatory Authorities and OSMRE Reported Holding $10.2 Billion in Various Types of Financial Assurances

State regulatory authorities and OSMRE reported holding a total of approximately $10.2 billion in surety bonds, collateral bonds, and self-bonds as financial assurances for coal mine reclamation in 2017. Of the total amount of financial assurances, approximately 76 percent ($7.8 billion) were in the form of surety bonds, 12 percent ($1.2 billion) in collateral bonds, and 12 percent ($1.2 billion) in self-bonds (see fig. 1).
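The self-bonding eligibility tests described earlier in this section reduce to a few numeric thresholds. The following is a minimal sketch of those tests, for illustration only; the function names, the simplified rating-scale ordering, and the example values are our own assumptions and are not part of any OSMRE or state system.

```python
# Illustration of the self-bonding tests summarized above. The rating-scale
# ordering is an assumption for this sketch; real agency scales differ.
RATINGS_A_OR_HIGHER = {"A", "A+", "AA-", "AA", "AA+", "AAA"}

def meets_financial_test(bond_rating: str, net_worth: float,
                         us_fixed_assets: float) -> bool:
    """An operator must satisfy at least one of the three tests: an "A" or
    higher bond rating, net worth of at least $10 million, or fixed assets
    in the United States of at least $20 million."""
    return (bond_rating in RATINGS_A_OR_HIGHER
            or net_worth >= 10_000_000
            or us_fixed_assets >= 20_000_000)

def within_self_bond_cap(total_self_bonds: float,
                         tangible_net_worth_us: float) -> bool:
    """Total self-bonds may not exceed 25 percent of the operator's
    tangible net worth in the United States."""
    return total_self_bonds <= 0.25 * tangible_net_worth_us

# Example: an unrated operator with $15 million net worth seeking to add a
# $4 million self-bond to $3 million in existing self-bonds.
print(meets_financial_test("BBB", 15_000_000, 5_000_000))       # True
print(within_self_bond_cap(3_000_000 + 4_000_000, 15_000_000))  # False: cap is $3.75 million
```

As the example suggests, an operator can pass the basic financial test while the 25 percent cap still limits how much of its reclamation obligation may be self-bonded.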
Twenty-four states reported holding surety bonds, 20 states reported holding collateral bonds, and 8 states reported holding self-bonds (see table 1). In addition, OSMRE officials identified 6 states—Indiana, Kentucky, Maryland, Ohio, Virginia, and West Virginia—that have also established alternative bonding systems, such as bond pools. In a state with a bond pool, the operator may generally post a financial assurance for less than the full estimated cost of reclamation; in addition, the operator must pay into a bond pool. The pooled funds can be used to supplement forfeited financial assurances to reclaim sites that operators participating in the bond pool do not reclaim.

About Half of the States Reported at Least One Forfeited Financial Assurance

States and OSMRE reported that operators forfeited more than 450 financial assurances for reclaiming coal mines between July 2007 and June 2016, with 13 of the 25 states reporting at least one forfeiture. States and OSMRE reported that the amount of financial assurance forfeited was sufficient to cover the cost of required reclamation in about 52 percent of the cases but was insufficient in about 22 percent of the cases. In the remainder of the cases (26 percent), the state or OSMRE reported that it had not yet determined whether the forfeited amount covered the reclamation costs it was intended to cover. State and OSMRE officials said that it can take many years to fully reclaim a site and that it may take time for them to identify the extent of reclamation needed and to determine if the amount of financial assurance forfeited was sufficient to cover reclamation costs.

State and OSMRE officials said there were several reasons why the amount of financial assurance obtained might not be sufficient to cover reclamation costs. For example, officials said the amount of financial assurance might not be sufficient if an operator mined in a manner inconsistent with the approved mining plan upon which the amount of financial assurance was calculated or if mining activity resulted in water pollution that was not considered when the amount of financial assurance was calculated. In cases where the amount of financial assurance does not cover the cost of reclamation, the operator remains responsible for reclaiming the mine site. However, OSMRE officials said that in those cases where the operator may be experiencing financial difficulties, it might be difficult for the states or OSMRE to compel the operator to complete the reclamation or provide additional funds to do so without having the operator go out of business or into bankruptcy. If the operator does not reclaim the site, the regulatory authority must use the forfeited financial assurance to do so. If the forfeited funds are not adequate, the site may not be fully reclaimed unless the regulatory authority either successfully sues the operator for more funds or provides any additional funds needed for reclamation. One other source of funds states can use to reclaim forfeited mines is civil penalties that the United States government collects from operators that violate conditions of their mining permits. OSMRE obligated approximately $2.8 million in civil penalties from fiscal years 2012 through 2017 for states to use to perform reclamation in cases where the financial assurance was not sufficient, according to agency officials.
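The sufficiency classification above is essentially a three-way tally over forfeiture records. A minimal sketch of that tally follows; the records and field layout are hypothetical, invented for illustration, and do not reproduce the data GAO collected.

```python
from collections import Counter

# Hypothetical forfeiture records: (state, amount_forfeited, estimated_cost).
# estimated_cost is None where the regulatory authority has not yet
# determined the full cost of required reclamation.
forfeitures = [
    ("State A", 450_000, 400_000),
    ("State B", 150_000, 275_000),
    ("State C", 760_000, None),
    ("State A", 90_000, 60_000),
]

def classify(amount_forfeited, estimated_cost):
    if estimated_cost is None:
        return "not yet determined"
    return "sufficient" if amount_forfeited >= estimated_cost else "insufficient"

tally = Counter(classify(amount, cost) for _, amount, cost in forfeitures)
total = sum(tally.values())
for outcome in ("sufficient", "insufficient", "not yet determined"):
    share = 100 * tally[outcome] / total
    print(f"{outcome}: {tally[outcome]} of {total} cases ({share:.0f}%)")
```

The "not yet determined" bucket matters because, as officials noted, the true reclamation cost may not be known until years after a forfeiture.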
OSMRE Has Taken a Variety of Steps Related to Oversight of Financial Assurances

OSMRE has taken steps—including periodically reviewing financial assurance amounts, inspecting mine sites, and reviewing state programs that implement SMCRA—to oversee financial assurances and aspects of the mining and reclamation process that can affect whether the amount of financial assurances obtained will cover the cost of required reclamation.

OSMRE and State Regulators Periodically Review Financial Assurance Amounts

SMCRA requires OSMRE or the primacy state regulatory authority to calculate the amount of financial assurance required for each mine and to adjust the amount when the area requiring bond coverage increases or decreases or when the cost of future reclamation changes. OSMRE officials and state regulatory authority officials from four of the six states we interviewed said they generally review the amount of financial assurance at least every 2 1/2 years or when the mining plan has been modified in a way that may affect the amount of financial assurance required. Such periodic reviews are in part to help ensure that OSMRE and state regulatory authorities continue to hold an amount sufficient to complete required reclamation as conditions change. These reviews can lead to OSMRE or the state regulatory authority changing the amount of financial assurance required for a mine. For example:

A state regulatory authority official in Utah said that the regulatory authority reviewed an existing mine permit in 2014, which led it to recalculate the estimated cost of reclamation on the basis of current costs. The state regulatory authority requested that the operator provide a financial assurance to cover the difference (approximately $195,000), in addition to the $445,000 financial assurance already in place. However, the official said that the operator—which had stopped mining the site in 2012 and filed for bankruptcy in 2013—did not provide the additional financial assurance amount. As a result, in 2017 the state regulatory authority collected the financial assurance that was in place (i.e., the operator forfeited its assurance). The official said in December 2017 that the state regulatory authority is determining the steps it will take to reclaim the site and expects that the forfeited amount will be sufficient to cover reclamation costs.

OSMRE officials said that the agency reviewed a permit for a mine on Navajo tribal lands and determined that it needed to ask the operator to provide an additional financial assurance in the amount of $5.7 million. The increase reflected inflation and the inclusion of certain costs, such as the cost of mobilizing equipment needed for reclamation, that had inadvertently been excluded from the earlier calculation of the required financial assurance. The officials said that the operator provided the additional financial assurance amount.

State regulatory authority officials in Wyoming said they review financial assurance amounts annually, and in 2017 they reduced the financial assurance for one mine by almost $35 million because of a substantial decline in fuel costs and the mine's ability to share the cost of needed reclamation equipment with a neighboring mine.

OSMRE Inspects Mine Sites

SMCRA requires OSMRE to make an average of at least one complete inspection per calendar quarter and one partial inspection per month for each active permit for which it is the regulatory authority, to ensure that mines are in compliance with SMCRA and federal regulations.
Complete inspections cover all inspection elements in OSMRE's directive, while partial inspections may instead focus on issues that most frequently result in violations or a specific topic identified for oversight, according to OSMRE officials. In addition, OSMRE's directive instructs the agency to inspect a sample of mines annually in states that have primacy to monitor and evaluate approved state programs' compliance with SMCRA. The total number of inspections OSMRE is directed to conduct in primacy states is based on the number of inspectable units in each state. Complete inspections are to be done on 33 percent of those sites selected for inspection. Overall, OSMRE completed more inspections in primacy states than directed each year for evaluation years 2013 through 2016, according to agency data. For example, in evaluation year 2016, OSMRE's directive called for it to conduct 1,225 inspections and OSMRE completed 1,388.

As part of a complete inspection, OSMRE confirms that the operator is following the mining and reclamation plans to ensure that the amount of financial assurance in place is adequate, according to OSMRE officials. If a violation is identified during an inspection, SMCRA requires OSMRE to issue a ten-day notice to the state regulatory authority or an immediate cessation order to the operator. If the violation increases the estimated cost of reclamation (e.g., if the operator disturbed more land than it was approved for) or an adequate financial assurance had not been collected, OSMRE or the state regulatory authority can request that the operator provide an additional financial assurance. For example:

OSMRE issued a ten-day notice to the Pennsylvania regulatory authority in 2015 because a water treatment system for a mine in that state did not have a financial assurance. According to OSMRE officials, the state regulatory authority took appropriate action to resolve the situation by issuing an order for the operator to post a financial assurance within 7 days.

During an inspection of a mine in Tennessee, a nonprimacy state, OSMRE determined that the operator had not correctly reclaimed a portion of the mine because the slope of the regraded area was too steep, according to an OSMRE official. For the reclamation work that would be needed to regrade that area, OSMRE determined that the operator needed to provide an additional financial assurance of $272,000.

OSMRE Reviews State Coal Programs

Under SMCRA, OSMRE is required to evaluate each primacy state's coal program annually to ensure that it complies with SMCRA. SMCRA includes a requirement that the regulatory authority secure necessary financial assurances to ensure the reclamation of each permitted mine site. While OSMRE's directive on oversight of state and tribal regulatory programs does not instruct the agency to review state regulatory authority calculations of financial assurance amounts, it instructs OSMRE to focus on the state programs' success in achieving the overall purposes of SMCRA. For example, OSMRE, in conducting its oversight, is to evaluate the states' effectiveness in successfully reclaiming lands affected by mining and in avoiding negative effects outside of areas authorized for mining activities. If OSMRE's review of a state program identifies an issue that could result in the state not effectively implementing, administering, enforcing, or maintaining all or any portion of its approved coal program, OSMRE can work with the state regulatory authority to develop an action plan to correct the issue.
If a state regulatory authority does not take the necessary corrective action, OSMRE may begin the process of withdrawing approval for a part or all of the state's primacy. In addition to annually evaluating state programs, OSMRE can conduct national or regional reviews on specific topics. For example, OSMRE conducted a national review in 2010 that examined how state regulatory authorities calculated the required amount of financial assurances for coal mine reclamation. The review examined financial assurance practices in 23 states and reported that on the basis of the sample of mining permits reviewed, OSMRE was unable to determine if the amount of financial assurances was adequate for at least one of the permits it reviewed in 10 of the 23 states. Among the potential issues OSMRE identified were errors in the methods state regulatory authorities used to calculate financial assurance amounts and insufficient information in the reclamation plan upon which to calculate reclamation costs. OSMRE has worked with the 10 state regulatory authorities to address the financial assurance issues identified in the 2010 review.

For example, OSMRE's review found that the regulatory authority in Pennsylvania did not secure sufficient financial assurances to complete reclamation plans, in part because amounts were not calculated based on the actual sizes of the areas excavated for mining. In August 2014, OSMRE and Pennsylvania's regulatory authority agreed to an action plan to ensure that the financial assurances for all active and new permits would be calculated using the actual sizes of the excavated areas. According to an OSMRE official, as of February 2017, the state regulatory authority had recalculated the financial assurance amount for all mines and had secured the additional financial assurances needed from operators of all but two of the mines. State officials said in October 2017 that they were continuing to work to obtain the assurances required for the two mines.

OSMRE's 2010 review also found that financial assurances in Kentucky were not always sufficient to cover required reclamation costs, in part because the method Kentucky's regulatory authority used to calculate financial assurance amounts did not factor in all costs, such as the cost of moving equipment to and from the reclamation site. In February 2011, OSMRE and Kentucky's regulatory authority signed an action plan identifying steps needed to address the issues OSMRE had identified. However, in May 2012, OSMRE determined that the state regulatory authority's proposed changes to its method for calculating financial assurance amounts were an improvement but would not result in the authority obtaining sufficient funds to cover required reclamation. As a result, OSMRE initiated the process of revoking Kentucky's primacy for this aspect of its program. In response, Kentucky implemented regulations to increase the minimum financial assurance required. The regulations also required the state regulatory authority to evaluate financial assurance amounts every 2 years to determine whether they need to be increased, among other things. The state regulatory authority sent a set of program amendments to OSMRE designed to address the identified deficiencies, some of which OSMRE is currently reviewing.
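The calculation problems described above (estimates not based on actual disturbed acreage and mobilization costs omitted) can be made concrete with a simple cost build-up. The sketch below is illustrative only; the cost components, unit rates, and contingency are invented placeholders, and no regulatory authority's actual method is reproduced here.

```python
# Illustrative reclamation cost build-up. Unit costs and the contingency
# rate are placeholder assumptions, not values from any state's method.
def estimate_reclamation_cost(disturbed_acres: float,
                              regrade_cost_per_acre: float,
                              reveg_cost_per_acre: float,
                              mobilization_cost: float,
                              contingency_rate: float = 0.10) -> float:
    # Base the estimate on the acreage actually disturbed, not the
    # originally projected acreage (the issue OSMRE found in Pennsylvania).
    base = disturbed_acres * (regrade_cost_per_acre + reveg_cost_per_acre)
    # Include the cost of moving equipment to and from the site (the cost
    # omitted from Kentucky's earlier method), plus a contingency.
    return (base + mobilization_cost) * (1 + contingency_rate)

# Example: 300 disturbed acres at $8,000/acre regrading and $1,500/acre
# revegetation, with $250,000 in equipment mobilization.
print(f"${estimate_reclamation_cost(300, 8_000, 1_500, 250_000):,.0f}")  # $3,410,000
```

Omitting a single component, such as mobilization, understates the example estimate by several hundred thousand dollars, which mirrors the shortfalls OSMRE identified.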
OSMRE and State Regulatory Authorities Face a Number of Challenges in Managing Financial Assurances

OSMRE and state regulatory authorities face a number of challenges in managing financial assurances for coal mine reclamation—including those related to self-bonding, unanticipated reclamation costs, and the financial stability of surety companies—according to federal and selected state regulatory authority officials, representatives from organizations associated with the mining and financial assurance industries, and representatives from environmental nongovernmental organizations whom we interviewed.

Regulatory Authorities Face Several Challenges Associated with Self-Bonding

Challenges facing OSMRE and state regulatory authorities related to self-bonding include the following:

Not knowing the complete financial health of an operator. The information federal regulations require operators to provide to regulatory authorities may provide an incomplete picture of the financial health of an operator, according to some parties we interviewed. For example, the financial information that operators provide reflects their past financial health, which may not correspond to their current financial position, according to OSMRE's response to the 2016 petition seeking revisions to its self-bonding regulations. In addition, if an operator applying for a self-bond is a subsidiary of another company, the operator is not required by regulation to submit information on the financial health of its parent company. While the operator applying may have sufficient financial assets to qualify for self-bonding, if its parent company experiences financial difficulties, the operator's assets may be drawn on to meet the parent's obligations, which could worsen the financial health of the self-bonded operator. In addition, according to OSMRE officials, even if OSMRE or a state regulatory authority were to become aware that an operator's parent company was at financial risk, it would be difficult for the agency to deny the operator's request for a self-bond because eligibility is specific to the entity applying for the self-bond, according to regulations. OSMRE could change its self-bonding regulations to require more information, according to OSMRE officials. However, the financial relationships between parent and subsidiary companies have become increasingly complex, making it difficult to ascertain an operator's financial health on the basis of information reported in company financial and accounting documents, according to officials. When OSMRE first approved its self-bonding regulations in 1983, it noted that it was attempting to provide rules that would allow self-bonding without requiring regulatory authorities to employ financial experts to determine which companies should be allowed to self-bond. However, according to OSMRE officials, financial expertise is now often needed to evaluate the current complex financial structures of large coal companies, which was not envisioned when the regulations were developed.

Difficulty in determining whether an operator qualifies for self-bonding. The regulatory authority in a given state may not be aware that an operator had self-bonded in other states, making it difficult for the agency to determine whether the operator qualifies for self-bonding, according to some parties we interviewed. Operators are only allowed to self-bond for up to 25 percent of their net worth in the United States, according to regulations.
Regulatory authority decisions on accepting self-bonds generally focus on assessing activities occurring in a specific state, not nationwide, according to the Interstate Mining Compact Commission. As a result, the state regulatory authority or OSMRE may not know whether an operator has applied for self-bonds in other states that, if approved, would exceed 25 percent of its net worth in total.

Difficulty in replacing existing self-bonds with other assurances if needed. OSMRE and state regulatory authorities may find it difficult to get operators to replace existing self-bonds with another type of financial assurance when needed, according to some parties we interviewed. If an operator no longer qualifies for self-bonding (e.g., if it has declared bankruptcy), federal regulations require it to either replace self-bonds with other types of financial assurances or stop mining and reclaim the site. In either case, however, some parties noted that such actions could lead to a worsening of the operator's financial condition, which could make it less likely that the operator will successfully reclaim the site. Some parties we interviewed have noted that regulatory authorities may be reluctant to direct the operator to replace a self-bond with another type of financial assurance and may instead allow the operator to keep mining so that any generated revenue could help the operator reclaim the site. For example, in 2015 the Wyoming regulatory authority determined that an operator no longer qualified for self-bonding and ordered it to replace a $411 million self-bond. However, the operator entered into bankruptcy without having replaced the self-bond. In this case, the state regulatory authority determined that reclamation was more likely to occur if the operator continued mining and allowed the operator to do so without a valid financial assurance. The operator replaced its self-bond as a part of its bankruptcy settlement approximately 17 months after the state regulatory authority's order to replace the self-bond, according to OSMRE officials. However, if a self-bonded operator were to enter bankruptcy and did not secure a financial assurance to replace the self-bond or complete the required reclamation, the state regulatory authority would have to work through the bankruptcy proceedings to obtain funds for reclamation, according to OSMRE's preamble to its 1983 self-bonding regulations. As a result, the state may recover only some, or possibly none, of the funds promised through the self-bond, and the cost of reclamation could fall on taxpayers.

Difficulty in managing the risk associated with self-bonding. The risk associated with self-bonding is greater now than when the practice was first authorized under SMCRA, according to some parties we interviewed. According to SMCRA, the purpose of financial assurances is to ensure that regulatory authorities have sufficient funds to complete required reclamation if the operator does not do so. While SMCRA allows self-bonding in certain circumstances, when OSMRE first approved its self-bonding regulations, the agency did so noting that at the time there were companies financially sound enough that the probability of bankruptcy was small. Furthermore, the regulations stated that the intent was to avoid, to the extent reasonably possible, the acceptance of a self-bond from a company that would enter bankruptcy.
However, as previously mentioned, three of the largest coal companies in the United States declared bankruptcy in 2015 and 2016, and these companies held approximately $2 billion in self-bonds at the time, according to an OSMRE August 2016 policy advisory, making the risk landscape very different from the one originally envisioned. Following these bankruptcies—and recognizing that the coal industry was likely to continue to face economic challenges for several more years—OSMRE initiated steps in 2016 to reexamine the role of self-bonding for coal mine reclamation. Specifically, as previously mentioned, OSMRE issued a policy advisory in August 2016 noting that given these circumstances, state regulatory authorities should exercise their discretion under SMCRA and not accept new or additional self-bonds for any permit until coal production and consumption market conditions reach equilibrium. OSMRE has reported that this is not likely to occur until at least 2021. OSMRE also announced in September 2016 that the agency planned to examine changes to its bonding regulations that would, among other things, help ensure that reclamation is completed if a self-bonded operator does not do so. However, following a review of department actions that could affect domestic energy production, Interior announced in October 2017 that it was reconsidering the need for and scope of potential changes to its bonding regulations. OSMRE officials said that they did not have a timeline for finalizing a decision on potential changes to the bonding regulations. In addition, OSMRE rescinded its August 2016 policy advisory, which had recommended that states take steps to assess whether operators currently using self-bonds still qualify to do so and that states not accept any new self-bonds.

Similar issues involving bankruptcies of hardrock mining operators led the Bureau of Land Management to implement regulations in 2001 eliminating the use of self-bonding for hardrock mining. In doing so, the Bureau of Land Management determined that a self-bond is less secure than other types of financial assurances, especially in cases where commodity prices fluctuate. The agency also noted that operators that would otherwise be eligible to self-bond should not have a significant problem obtaining another type of financial assurance. In our previous work examining other types of environmental cleanup, we found that the financial risk to the government and the amount of oversight needed for self-bonds are relatively high compared to other forms of financial assurances. We also previously reviewed federal financial assurance requirements for coal mining, hardrock mining, onshore oil and gas extraction, and wind and solar energy production and found that of these activities, coal mining is the only one where self-bonding was allowed. Because SMCRA explicitly allows states to decide whether to accept self-bonds, eliminating the risk that self-bonding poses to the federal government and states would require that SMCRA be amended.

Obtaining Additional Financial Assurances for Unanticipated Reclamation Can Be Difficult

Unanticipated reclamation costs, such as those related to long-term treatment for water pollution, may arise late in a mine's projected lifespan, and the operator may not have the financial means to cover the additional costs, according to OSMRE officials.
Under SMCRA, OSMRE and state regulatory authorities are not to approve a permit for a coal mine if the regulatory authority expects the mine to result in long-term water pollution. As a result, since long-term water pollution is not anticipated to occur, the cost of addressing it would not be included in the initial financial assurance that the operator provides. If the regulatory authority later determines that long-term water treatment is needed, the regulatory authority must adjust the amount of financial assurance that the operator is required to provide. Some parties we interviewed have also noted that the costs and duration of long-term water treatment are not well defined and that surety bonds are not well-suited to provide assurance for such indefinite long-term costs. For example, according to the Interstate Mining Compact Commission, surety bonds are designed for shorter-term, defined obligations that have a high certainty for bond release following the completion of reclamation.

To help address this challenge, some states have established, or allowed operators to establish, trust funds to help cover such unanticipated reclamation costs. For example, West Virginia established a fund, primarily supported through a tax on the amount of coal mined, to operate water treatment systems on forfeited sites. West Virginia's regulatory authority is also working to evaluate permits for sites with water pollution to estimate water treatment costs within the state more precisely. Similarly, Pennsylvania allows operators to establish trust funds that are maintained by foundations and monitored by the state regulatory authority and are intended to ensure that there are sufficient funds to cover the costs of long-term water treatment, according to state regulatory authority officials. In addition, the OSMRE-run coal program in Tennessee allows trust funds for water treatment, in part because an assurance system that provides an income stream may be better suited to ensuring the treatment of long-term water pollution than conventional financial assurances, according to an OSMRE notice in the Federal Register.

Determining the Financial Stability of Surety Companies Has Been Challenging in Certain Instances

The utility of surety bonds in providing a financial assurance depends on the surety company's ability to pay the amount pledged if the operator forfeits. OSMRE regulations require that a surety company be licensed to do business in the state where a mine is located. Some parties we interviewed noted that surety companies have declared bankruptcy or experienced financial difficulties in the past and could experience similar difficulties in the future. In addition, two states reported recent issues related to surety companies. For example, state regulatory authority officials in Alabama said that a surety company that had provided surety bonds totaling $760,000 for four mines in that state had gone bankrupt or was insolvent. As of May 2017, the state had collected only $127,000. Similarly, state regulatory authority officials in Alaska said that as of August 2017, the state had not collected any part of a forfeited $150,000 surety bond because the surety company had gone bankrupt. In our previous work examining other types of environmental cleanup, we have found that the financial risk to the government and the amount of oversight needed for surety bonds are relatively low to moderate compared to other forms of financial assurances.
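Returning to the trust fund approach described above, why an income-stream vehicle suits indefinite water treatment better than a fixed-sum bond can be shown with a standard funding calculation. The sketch below assumes a simple perpetuity/annuity model; the report does not specify how states size these trust funds, so the formulas, return rate, and costs here are illustrative assumptions only.

```python
# Illustrative trust fund sizing for long-term water treatment. The
# perpetuity/annuity model and all values are assumptions for this sketch.
def perpetual_treatment_principal(annual_cost: float, real_return: float) -> float:
    """Principal whose investment income covers treatment indefinitely."""
    return annual_cost / real_return

def finite_treatment_principal(annual_cost: float, real_return: float,
                               years: int) -> float:
    """Present value of an annuity covering a fixed treatment horizon."""
    return annual_cost * (1 - (1 + real_return) ** -years) / real_return

# Example: $200,000 per year in treatment costs at a 3 percent real return.
print(f"${perpetual_treatment_principal(200_000, 0.03):,.0f}")   # $6,666,667
print(f"${finite_treatment_principal(200_000, 0.03, 30):,.0f}")  # about $3.9 million
```

A surety bond written for a fixed sum cannot adapt if treatment runs longer than assumed, whereas a funded trust earning a return can, which is consistent with the Federal Register rationale noted above.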
Conclusions

Billions of dollars have been spent to reclaim mines abandoned prior to the financial assurance requirements SMCRA put in place, and billions more in reclamation costs remain. Under SMCRA, self-bonding is allowed for coal mine operators with a history of financial solvency and continuous operation; coal mining is the only type of energy production or mineral extraction activity we have reviewed for which this is allowed. Bankruptcies of coal mine operators in 2015 and 2016 have highlighted risks that OSMRE and state regulatory authorities face in managing self-bonding—risks that may be greater today than when self-bonding was first authorized under SMCRA. If a self-bonded operator enters bankruptcy and does not provide a different type of financial assurance or complete the required reclamation, the regulatory authority and the taxpayer potentially assume the risk of paying for the reclamation. Although OSMRE said it would examine changes to its self-bonding regulations following recent bankruptcies, Interior recently said that it is reconsidering the need to do so. Because SMCRA explicitly allows states to decide whether to accept self-bonds, eliminating the risk that self-bonding poses would require amending SMCRA. Until such a change is made, the government will remain potentially at financial risk for future reclamation costs resulting from coal mines with unsecured financial assurances.

Matter for Congressional Consideration

Congress should consider amending SMCRA to eliminate the use of self-bonding as a type of financial assurance for coal mine reclamation. (Matter for Consideration 1)

Agency Comments

We provided a draft of this report to the Department of the Interior for review and comment. Interior did not provide written comments on our findings and matter for congressional consideration. OSMRE provided technical comments in an e-mail, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of the Interior, the Acting Director of OSMRE, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions, please contact Anne-Marie Fennell at (202) 512-3841 or fennella@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix II.

Appendix I: Characteristics of States GAO Selected for Review to Obtain Additional Information regarding OSMRE Oversight

We selected a nonprobability sample of states to examine the Office of Surface Mining Reclamation and Enforcement's (OSMRE) oversight activities in more detail. We generally selected states that produced the most coal in 2015 but also selected states in order to achieve some variation in factors such as geographic location, the dominant type of coal mining conducted (e.g., surface or underground mining), whether the state had primacy, and whether the state allowed self-bonding (see table 2).
Appendix II: GAO Contact and Staff Acknowledgments

GAO Contact

Anne-Marie Fennell, (202) 512-3841 or fennella@gao.gov.

Staff Acknowledgments

In addition to the contact named above, Elizabeth Erdmann (Assistant Director), Antoinette Capaccio, Jonathan Dent, Cynthia Grant, Marya Link, Anne Rhodes-Kline, Sheryl Stein, Guiovany Venegas, and Jack Wang made key contributions to this report.
Why GAO Did This Study

Coal accounts for 17 percent of domestic energy production. SMCRA requires coal mine operators to reclaim lands that were disturbed during mining and to submit a financial assurance in an amount sufficient to ensure that adequate funds will be available to complete reclamation if the operator does not do so. Recent coal company bankruptcies have drawn attention to whether financial assurances obtained by OSMRE and state agencies will be adequate to reclaim land once coal mining operations have ceased.

GAO was asked to review management of financial assurances for coal mine reclamation. This report describes, among other things, the amounts and types of financial assurances held for coal mine reclamation in 2017 and the challenges that OSMRE and state agencies face in managing these financial assurances. GAO collected and analyzed data from OSMRE and 23 state agencies; reviewed federal laws, regulations, and directives; and interviewed OSMRE and state agency officials and representatives from organizations associated with the mining and financial assurance industries and environmental organizations.

What GAO Found

State agencies and the Department of the Interior's Office of Surface Mining Reclamation and Enforcement (OSMRE) reported holding approximately $10.2 billion in surety bonds (guaranteed by a third party), collateral bonds (guaranteed by a tangible asset, such as a certificate of deposit), and self-bonds (guaranteed on the basis of a coal operator's own finances) as financial assurances for coal mine reclamation. OSMRE and state agencies face several challenges in managing financial assurances, according to the stakeholders GAO interviewed. Specifically:

Obtaining additional financial assurances from operators for unanticipated reclamation costs, such as long-term treatment for water pollution, can be difficult.

Determining the financial stability of surety companies has been challenging in certain instances.

Self-bonding presents a risk to the government because it is difficult to (1) ascertain the financial health of an operator, (2) determine whether the operator qualifies for self-bonding, and (3) obtain a replacement for existing self-bonds when an operator no longer qualifies.

In addition, some stakeholders said that the risk from self-bonding is greater now than when the practice was first authorized under the Surface Mining Control and Reclamation Act (SMCRA). GAO's previous work examining environmental cleanup found that the financial risk to government and the amount of oversight needed for self-bonds are relatively high compared to other forms of financial assurances. GAO also previously reviewed federal financial assurance requirements for various energy and mineral extraction sectors and found that coal mining is the only one where self-bonding was allowed. However, because SMCRA explicitly allows states to decide whether to accept self-bonds, eliminating the risk that self-bonds pose to the federal government and states would require SMCRA be amended.

What GAO Recommends

GAO recommends that Congress consider amending SMCRA to eliminate self-bonding. Interior neither agreed nor disagreed with GAO's recommendation.
Background

Over the last decade, VHA has increasingly provided care on an outpatient basis, including primary care and mental health care services. VHA Handbook 1006.02, VHA Site Classifications and Definitions, defines classifications for outpatient sites of care including CBOCs. VHA's Directive 1229, Planning and Operating Outpatient Sites of Care, outlines the process for establishing new CBOCs.

VHA's Outpatient Sites of Care

VHA provides outpatient care through CBOCs, health care centers, and other outpatient services sites, which are defined in VHA's site classification policy:

CBOCs are clinics that provide primary care and mental health care services, and also may provide specialty care services such as cardiology or neurology, in an outpatient setting. CBOCs vary widely, ranging from a small, mainly telehealth clinic with one technician and a nurse to a large clinic with several specialty care services and providers. Each clinic is overseen by, and separate from, its VAMC; each VAMC in turn is overseen by one of 18 VISNs.

Health care centers are large multi-specialty outpatient clinics that provide primary care, mental health care, and on-site surgical services, in addition to other health care services.

Other outpatient services sites provide nonclinical services, such as social services, homelessness services, and support services. They may also provide services that are clinical in nature through telehealth or other arrangements. (See fig. 1.)

VHA's Process for Establishing New CBOCs

To establish a new CBOC, VHA's policy states that the VAMC and VISN must ensure that one is needed by first exhausting existing VHA resources (such as changing clinic hours or staffing) and determining that VHA community care programs cannot meet the identified demand. The VAMC and VISN follow several steps to assess the need for a new clinic:

Step 1—The VAMC and VISN identify an underserved area using VHA models that project changes in the veteran population and trends in veterans' health care needs.

Step 2—The VAMC develops a detailed proposal for the new clinic—an Access Expansion Plan—that includes information such as whether the proposed clinic will be VHA-operated or contracted, projected workload, scope of the services to be provided, and cost. It also describes, as required by VHA policy, how the VAMC has exhausted existing VHA resources before proposing a new clinic.

Step 3—The VISN reviews the expansion plan and, if approved, forwards it to an interdisciplinary panel at VHA's central office, which reviews it. A list of approved clinics is then sent to the Under Secretary for Health for endorsement.

Step 4—Endorsed clinics are included in the VISN's Strategic Capital Investment Planning process submission for the fiscal year. Final approval and funding for a new CBOC is dependent on Office of Management and Budget approval of VA's budget submission and VHA's final appropriations.

In fiscal year 2015, VHA suspended the establishment of new CBOCs beginning in fiscal year 2018 due to several factors, including budget constraints and an emphasis on the use of VHA community care programs. However, VISNs can submit requests for exceptions to the Deputy Under Secretary for Health for Operations and Management for review. VHA officials told us 11 exceptions had been granted as of February 2018.
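The four-step process above is, in effect, a sequential approval pipeline. The following is a minimal conceptual sketch of that pipeline, for illustration only; the stage names and function are our own and do not represent any VHA system.

```python
from enum import Enum
from typing import Optional

class Stage(Enum):
    # Conceptual stages mirroring the four steps described above.
    IDENTIFY_UNDERSERVED_AREA = 1   # Step 1: VAMC/VISN projection models
    ACCESS_EXPANSION_PLAN = 2       # Step 2: VAMC proposal
    VISN_AND_PANEL_REVIEW = 3       # Step 3: VISN review, central office panel,
                                    #         Under Secretary endorsement
    CAPITAL_PLANNING = 4            # Step 4: Strategic Capital Investment
                                    #         Planning submission
    FUNDED = 5                      # OMB approval and final appropriations

def advance(stage: Stage, approved: bool) -> Optional[Stage]:
    """Move a proposal to the next stage, or return None if it is not
    approved at the current stage."""
    if not approved:
        return None
    return Stage(stage.value + 1) if stage is not Stage.FUNDED else stage
```

A proposal reaches FUNDED only by clearing every gate in order, which matches the policy's intent that a new clinic be justified locally, reviewed regionally, and endorsed centrally before funding.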
VHA-Operated CBOCs Provided Proportionally More Specialty Care and Had Higher Expenditures than Contracted CBOCs in Fiscal Years 2014 through 2016

VHA-Operated CBOCs Provided Proportionally More Specialty Care and Less Primary Care and Mental Health Care than Contracted CBOCs

We found that VHA-operated CBOCs provided more specialty care and less primary care and mental health care as a proportion of their total provided services than contracted CBOCs in fiscal years 2014 through 2016. For example, in fiscal year 2016, specialty care (e.g., cardiology, gastroenterology, physical therapy) comprised 13 percent of services provided at VHA-operated clinics and 5 percent of services provided at contracted clinics. In contrast, VHA-operated clinics provided proportionally less primary care and mental health services (services offered at all CBOCs) in fiscal year 2016—these services comprised 66 percent of the services provided at VHA-operated clinics, but 84 percent of the services provided at contracted clinics. (See fig. 2.)

We found that VHA-operated CBOCs provided several specialty care services that were not offered in contracted CBOCs. For example, dental care services and gastrointestinal endoscopy were provided by multiple VHA-operated clinics, but were not provided by any of the contracted clinics in fiscal year 2016. In addition, we found that VHA-operated clinics were generally larger and provided more complex services than contracted clinics. For example, multi-specialty CBOCs (clinics that provide two or more on-site specialty care services, and which may offer procedures requiring local anesthesia or sedation) were more often VHA-operated than contracted. Of the 733 CBOCs in fiscal year 2016, 210 were classified by VHA as multi-specialty, and nearly all of these (206) were VHA-operated.

Officials from the four VAMCs and VISNs in our review told us decisions about what types of services CBOCs provide are made on a case-by-case basis according to local needs. For example, officials from one VAMC told us they decided to add physical therapy specialty care to one of their VHA-operated clinics based on analysis indicating that veterans' need for this care in their community would increase. Also, officials said they wanted to alleviate the travel burden for veterans who needed the care, as the next closest VHA facility that offered this care was a 2.5-hour drive away. Officials from another VAMC told us that they approached the service needs at their clinics from a regional perspective, allowing for veteran demand for services to be met across multiple clinics in the same geographic area instead of relying on one clinic to meet the need. As a result of this approach, VAMC officials were in the process of expanding services at two of their clinics.

VHA-Operated CBOCs Had Higher Expenditures than Contracted CBOCs

From fiscal years 2014 through 2016, we found that VHA-operated CBOCs had higher per-encounter expenditures than contracted CBOCs—a difference ranging from 3 to 5 percent per encounter. (See table 2.) We also found that per-encounter expenditures for almost all service types were higher on average for VHA-operated CBOCs than contracted CBOCs in fiscal year 2016; the exception was mental health care services, where VHA-operated clinics' per-encounter expenditures were 2 percent lower than for contracted clinics. The difference in per-encounter expenditures was greatest for specialty care services.
For example, VHA-operated clinics' per-encounter expenditures for specialty care services were 46 percent higher than for contracted clinics. This is in contrast to primary care, where VHA-operated clinics had 11 percent higher per-encounter expenditures, on average, compared to contracted clinics. (See fig. 3.)

Officials told us that several factors can influence per-encounter expenditures, including (1) differences in provider compensation and types of providers (physicians vs. physician assistants); (2) the number of patients with complex health conditions that generally require longer visits and more costly services (as opposed to patients with well-managed conditions); and (3) geographic differences in the cost of providing care. One of our selected contracted CBOCs had one of the highest per-encounter expenditures for fiscal year 2016 among all clinics. Officials from this clinic's VAMC told us this was because the contractor was able to command a very high payment rate at the time of the contract award, due to temporarily strong local economic conditions, and was the only contractor in the area capable of providing the required services. Officials said the VAMC is in the process of awarding a new contract for this clinic.

Although per-encounter expenditures were generally lower for contracted CBOCs, officials from the VISNs and VAMCs in our review told us they consider several factors in determining whether a new clinic will be VHA-operated or contracted. Such factors include the ability to directly monitor performance and implement new standards of care, as well as the ability to recruit and staff the clinic. For example, officials from two VAMCs in our review told us that VHA-operated clinics can be easier to manage because the VAMC has direct control of the clinic. Officials said this makes it easier to implement changes to VHA standards of care without the need to enter into contract modification negotiations. On the other hand, officials from three of the four VISNs and three of the four VAMCs in our review told us that contractors can be more flexible than VHA in recruiting staff (such as the ability to offer higher salaries), making a contracted clinic desirable for geographic areas where VHA has challenges recruiting or retaining providers.

VHA Has Not Fully Implemented Policy Requirements, and Inaccurate Information Limits Its Oversight of CBOC Quality of Care

VHA Has Not Fully Implemented CBOC Oversight Policy Requirements

We found that VHA has implemented certain oversight requirements, but not others described in Directive 1229—its policy that outlines VHA's oversight responsibilities for outpatient sites of care, including CBOCs. In terms of the oversight requirements that VHA implemented, we found it has provided reports on patient satisfaction to VISNs and VAMCs on a monthly basis. Specifically, VHA distributes the results of the VHA Survey of Healthcare Experiences, a monthly survey of veterans' satisfaction with the care they received through VHA health care facilities. In addition, VHA implemented the requirement to make measures related to evaluating the progress of outpatient sites of care, such as data on wait times, workload, and costs, available on an internal VHA website.
However, VHA has not implemented other oversight requirements, which is inconsistent with federal standards for internal control related to monitoring, which state that management should establish and operate monitoring activities to monitor the internal control system and evaluate the results. We found that VHA has not implemented the following requirements in Directive 1229:

VHA has not developed guidelines for monitoring the quality and comprehensiveness of care in CBOCs. Officials from the three VHA offices with responsibility for collaborating to develop guidelines for monitoring the quality and comprehensiveness of care in CBOCs, as required in the policy, told us that they are not currently developing these guidelines and they have no plans to do so. First, officials from the office of the Assistant Deputy Under Secretary for Health for Policy and Planning told us they had not developed these guidelines because they no longer believed it was their office's responsibility, despite the fact that officials from the office had helped to develop the recently issued policy. Second, officials from the office of the Deputy Under Secretary for Health for Organizational Excellence told us that their office was not responsible for addressing the broader issue of monitoring clinics. Third, officials from the office of the Deputy Under Secretary for Health for Operations and Management told us that although they do not have formal guidelines in place, they believe their office meets the Directive 1229 requirement as part of their regular VISN oversight. Officials said they collect and review VISN-level performance data, such as patient satisfaction data, which can be broken down to the level of the CBOC if there is a performance problem. However, VHA may miss clinic performance problems that are not identifiable in the VISN-level data. In addition, without developing such guidelines, VHA has not established standardized processes for how it monitors CBOCs, which can lead to inconsistent oversight. This poses the risk that veterans may be subject to different standards of care depending on the clinic visited.

VISNs do not conduct continuous quality monitoring of CBOCs to ensure that consistent, quality care is being delivered. We found that three of the four VISNs in our review largely delegated oversight of the CBOCs to the VAMCs, rather than conducting continuous quality monitoring as required in the policy. Specifically, officials from these VISNs said that they largely focus their oversight on the VAMCs and do not separately review the performance of every CBOC unless the VAMC informs them of a quality problem at a particular clinic. Officials from the remaining VISN in our review said they do conduct CBOC-specific oversight activities. Specifically, this VISN had created a performance review survey tool that it sends to each clinic on an annual basis, and the results are reviewed by a workgroup made up of VISN staff. The workgroup examines trends across the CBOCs, including a comparison of VHA-operated and contracted performance. For example, one question in the tool asks how an individual CBOC's performance compares with others overseen by the VAMC. The delegation of oversight responsibility for the CBOCs to the VAMCs without consistent VISN-level oversight creates the potential for inconsistencies in oversight, which does not align with VHA policy to provide one standard of care for all clinics. Consequently, veterans may be subject to different standards of care across clinics.
The Deputy Under Secretary for Health for Operations and Management has not reviewed CBOC performance with VISNs as part of the quarterly VISN performance reviews. The Deputy Under Secretary for Health for Operations and Management is responsible for conducting reviews of VISN performance with each VISN director. Specifically, the office of the Deputy Under Secretary for Health for Operations and Management is required by VHA policy to review CBOC-level performance data during quarterly VISN performance reviews. However, officials from this office and two of the VISNs we contacted told us they do not specifically do this unless the VISN identifies a performance problem. Of the remaining two VISNs, officials at one VISN reported only having mid-year and year-end meetings with VHA central office at which they did not specifically discuss the CBOCs, and officials from the other VISN said they did not have any regular quarterly performance reviews with VHA central office. This lack of consistent oversight poses the risk that VHA is not providing one, high quality standard of care to veterans across CBOCs.

VHA's CBOC Report Lacks Accurate and Complete Information

Directive 1229 requires VHA to provide reports to the VISNs and VAMCs on CBOC quality of care on a quarterly and year-end basis. We found that the CBOC Report, which is VHA's only report that allows for comparing clinical quality of care data across VHA-operated and contracted CBOCs, lacks accurate and complete information. These gaps limit the CBOC Report's usefulness as a monitoring tool to determine whether VHA-operated and contracted CBOCs are providing the same standard of care. This is inconsistent with federal standards for internal control for information and communication, which state that management should use quality information to achieve the entity's objectives. Specifically, VHA distributes the CBOC Report, which compiles CBOC quality of care performance results based on the Healthcare Effectiveness Data and Information Set (HEDIS)—an industry standard set of quality measures—to VISNs and VAMCs on a quarterly and year-end basis. VISNs and VAMCs have access to other types of CBOC performance data, such as patient satisfaction data and wait time data, but these data are not used to assess clinical quality of care and they cannot be used to examine performance across all CBOCs or stratified by VHA-operated versus contracted CBOCs. In contrast, the CBOC Report allows for the comparison of clinical quality of care data across all CBOCs, which can be stratified according to whether the clinic is VHA-operated or contracted. However, we found the following issues with the CBOC Report:

Incorrect classification of CBOCs. We compared CBOCs from the most recent CBOC Report at the time of our review (the first quarter of fiscal year 2017) against sites in the VAST system (VHA's listing of all VHA sites of care and their characteristics) as of January 3, 2017. We found that 22 percent of sites were incorrectly classified as CBOCs, based on the site classifications in VAST. Several of these sites were much more complex, such as health care centers and VAMCs. For example, a VAMC was included in the report as a CBOC, but this VAMC has three specialized intensive care units and serves as a regional referral center for intensive inpatient surgery, including open heart surgery.
In addition, we also identified sites included in the report that provided less complex services than those that are provided in CBOCs, such as other outpatient services sites. VHA officials who produce the CBOC Report told us that, prior to the establishment of the VAST site classifications in 2014, they used their judgment to classify existing sites of care as CBOCs and they have not updated their classifications since then. For sites established since 2014, officials told us they use the VAST site classifications, but may also use their judgment in certain situations. For example, if a site's classification changed in VAST from a non-CBOC to a CBOC, they would make a decision about whether to classify it as a CBOC in the report by examining various aspects of the facility, such as the services provided and encounters. This procedure differs from what is documented in the methodology section of the CBOC Report, which states that site classifications are based on VAST. Further, VHA officials said they did not have a document available that outlined how they make these decisions. Because the site classifications in the CBOC Report are based, in part, on officials' judgment in addition to the classifications in VAST, the report does not present accurate information on CBOCs across VHA and is of limited usefulness to VHA as a tool to ensure that VHA-operated and contracted CBOCs are providing the same standard of care that is of high quality.

Missing CBOCs. We found that 53 CBOCs (7 percent of all CBOCs) were missing from the CBOC Report from the first quarter of fiscal year 2017, rendering the data incomplete. VHA officials provided examples of why a CBOC might not be included in the report. For example, a newer CBOC might not be included because it did not have quality of care data available at the time the report was developed. However, we identified several other sites that were listed in the report, despite unavailable data.

Inaccurate summary calculations. Due to the incorrect site classifications and missing CBOCs, the national- and VISN-level summary calculations of performance in the CBOC Report were also inaccurate. Specifically, the report includes national- and VISN-level averages for each HEDIS measure, which VHA officials can use as benchmarks for clinic performance. These averages were over-inclusive—incorporating performance results from additional sites that were not CBOCs—and under-inclusive—omitting performance results from CBOCs that were missing from the report. These inaccuracies may lead VHA officials to draw incorrect conclusions about the quality of care provided in CBOCs. For example, officials from one VAMC told us that they use the national averages as benchmarks against which they compare the performance of their CBOCs. Because this VAMC requires CBOCs with lower-than-average HEDIS performance results to develop a formal action plan to improve performance, officials may not be identifying clinics that are in need of an action plan due to the inaccuracy of the averages. In addition, VHA central office officials who develop the CBOC Report said that the results from recent reports have shown that VHA-operated and contracted clinics in general provided the same standard of care, but this conclusion may not be correct as it is based on unreliable data.

No guidance or training for use of the CBOC Report. VHA central office officials do not provide guidance or training specific to the CBOC Report to assist VISNs and VAMCs in using it to oversee CBOCs.
This is inconsistent with federal standards for internal control related to the control environment, which state that management should, among other things, develop personnel to achieve the entity's objectives. Such development may include training to enable individuals to develop competencies appropriate for key roles. In our review of the CBOC Report from the first quarter of fiscal year 2017, we found that in several places in the report, shorthand text and acronyms were used but not defined. In addition, although there is a methodology section, it is not clear that the measures described in the report are HEDIS measures, for which VHA makes training available. Several VAMC and VISN officials stated that guidance or training specific to understanding the CBOC Report would be helpful. If VISNs and VAMCs are not trained on how to use the report, they may not know how to use it to oversee CBOCs and ensure they are providing one standard of care that is of high quality.

No requirement for VISNs or VAMCs to use the CBOC Report. VHA does not require that the CBOC Report be used as a tool to oversee CBOCs. As a result, we found that the report was not widely used. Specifically, an official from the office of the Deputy Under Secretary for Health for Organizational Excellence, which produces the CBOC Report, told us that the office's role is to compile the reports and distribute them, but not to monitor performance. Officials from the office of the Deputy Under Secretary for Health for Operations and Management said that VISNs and VAMCs are expected to use the report as part of their CBOC oversight; however, we found there is no requirement that they do so. Officials from three of the four VISNs and three of the four VAMCs in our review were not regularly using the CBOC Report, while officials from one of the four VAMCs and one of the four VISNs were using it as part of CBOC oversight activities at the time of our review. Officials from another VISN said that they planned to start using the CBOC Report after we made them aware of it during our interview. If VISN and VAMC officials do not use the report as a part of their oversight, they may be missing opportunities to compare VHA-operated and contracted CBOCs and ensure they are providing one standard of care that is of high quality.

Conclusions

CBOCs are an integral part of VHA's health care delivery system, and VHA requires that such clinics, whether VHA-operated or contracted, provide the same standard of care to veterans that is of high quality. Although VHA has implemented certain policy requirements for CBOC oversight, we found several weaknesses in its oversight that make it difficult to determine whether it is ensuring this consistent standard of care across the clinics. Specifically, VHA has not fully implemented oversight requirements that align with its established policies, including a requirement to establish guidelines for overseeing CBOC quality of care. The CBOC Report, as VHA's only report comparing clinical quality of care across both VHA-operated and contracted clinics, could be an important part of those guidelines. However, as it currently stands, the report is inaccurate and incomplete, and VISNs and VAMCs are not trained on or required to use it; thus, it is of limited use to VHA, including the VISNs and VAMCs that have responsibility for CBOC oversight. As a result, VHA lacks assurance that both VHA-operated and contracted CBOCs are providing one standard of care that is of high quality.
Recommendations for Executive Action

We are making the following four recommendations to the VHA Under Secretary for Health:

Implement oversight requirements that align with VHA's existing policy, including developing guidelines for monitoring quality of care in CBOCs. (Recommendation 1)

Establish a process for regularly updating the CBOC Report to ensure it contains an accurate and complete list of CBOCs that is consistent with VHA's established site classifications. (Recommendation 2)

Ensure that VISNs and VAMCs receive guidance or training on how to use the CBOC Report. (Recommendation 3)

Require the use of the CBOC Report as an oversight tool for ensuring one standard of care that is of high quality across VHA-operated and contracted CBOCs. (Recommendation 4)

Agency Comments

We provided VA with a draft of this report for its review and comment. VA provided written comments, which are reprinted in appendix I. In its written comments, VA concurred with all four of the report's recommendations and identified actions it is taking to implement them. We are sending copies of this report to the appropriate congressional committees, the Acting Secretary of Veterans Affairs, the Under Secretary for Health, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or at draperd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II.

Appendix I: Comments from the Department of Veterans Affairs

Appendix II: GAO Contact and Staff Acknowledgments

GAO Contact

Debra A. Draper, (202) 512-7114 or draperd@gao.gov.

Staff Acknowledgments

In addition to the contact named above, Janina Austin, Assistant Director; Malissa G. Winograd, Analyst-in-Charge; Jennie F. Apter; Zhi Boon; Keith Haddock; and Sarah-Lynn McGrath made key contributions to this report. Also contributing were Jacquelyn Hamilton and Vikki Porter.
Why GAO Did This Study

In fiscal year 2016, VHA's 733 CBOCs provided care to more than 3 million veterans at a cost of $5.3 billion. Although most of these clinics are VHA-owned and -operated, 101 are operated through contracts with non-VHA organizations. VHA policy states that CBOCs, whether VHA-operated or contracted, must provide one standard of care that is of high quality. GAO was asked to review VHA's use of contracts to carry out core functions. This report examines, among other issues, the extent to which VHA oversees CBOC operations. To conduct this work, GAO reviewed VHA's policies and CBOC Report. GAO also interviewed officials from VHA's central office and from a nongeneralizable sample of eight CBOCs and their four respective VAMCs and VISNs. The CBOCs were selected for variation in factors such as contract status and geographic area.

What GAO Found

Community-based outpatient clinics (CBOC) are an important part of the Department of Veterans Affairs' (VA) Veterans Health Administration (VHA) health care delivery system. These clinics are geographically separate from VA medical centers (VAMC) and provide outpatient services, including primary care and mental health care. GAO found weaknesses in VHA's oversight of CBOCs:

Incomplete policy implementation. VHA has not implemented certain CBOC oversight requirements as outlined in its policy. Specifically, VHA has not developed guidelines for monitoring the quality and comprehensiveness of care in CBOCs, and officials said they have no plans to do so. Officials told GAO they believe the requirement was met as part of their regular oversight of Veterans Integrated Service Networks (VISN)—regional networks responsible for oversight of VAMCs and CBOCs. However, VHA may miss CBOC performance problems that are not identifiable in VISN-level data. Further, although policy requires VHA central office officials to review CBOC performance as part of quarterly VISN performance reviews, officials said they do not specifically do so unless the VISN identifies a problem. Officials from three of the four VISNs in GAO's review said they largely delegate CBOC oversight to VAMCs and do not separately review clinic performance unless a VAMC identifies a problem.

An inaccurate and incomplete CBOC Report. VHA's CBOC Report is prepared by VHA central office and distributed to VISNs and VAMCs quarterly and at year-end. The CBOC Report could be useful for comparing clinical quality of care between VHA-operated and contracted CBOCs, but it is inaccurate and incomplete. Specifically, VHA officials have used their judgment to classify certain sites as CBOCs in the report, rather than using the official classifications in policy. GAO found that 22 percent of sites were incorrectly classified as CBOCs when they were other types of sites, including VAMCs. As a result, the report is of limited usefulness to VHA as an oversight tool.

Lack of guidance or training on the CBOC Report. VHA central office officials do not provide guidance or training specific to understanding the CBOC Report to assist VISNs and VAMCs in their oversight of CBOCs. GAO found that in several places in the report, shorthand text and acronyms were used but not defined. In addition, several VISN and VAMC officials stated that guidance or training would be helpful.

No requirement to use the CBOC Report. VHA officials told GAO that VAMCs and VISNs are expected to use the CBOC Report as an oversight tool, but GAO found that VHA lacks a requirement that they do so.
Officials from three of the four VISNs and three of the four VAMCs in GAO's review were not using the report. These weaknesses potentially lead to inconsistent oversight and create a risk that VHA is not providing one standard of care that is of high quality to veterans across VHA-operated and contracted CBOCs.

What GAO Recommends

GAO recommends that VHA (1) implement oversight requirements that align with existing policy; (2) establish a process to ensure the CBOC Report is accurate and complete; (3) provide guidance or training to VISNs and VAMCs on how to use the CBOC Report; and (4) require use of the CBOC Report as an oversight tool. VA concurred with all of GAO's recommendations and identified actions it is taking to implement them.
Background

Human spaceflight at NASA began in the 1960s with the Mercury and Gemini programs leading up to the Apollo moon landings. After the last lunar landing, Apollo 17, in 1972, NASA shifted its attention to low earth orbit operations with human spaceflight efforts that included the Space Shuttle and International Space Station programs through the remainder of the 20th century. In the early 2000s, NASA once again turned its attention to cislunar and deep space destinations, and in 2005 initiated the Constellation program, a human exploration program that was intended to be the successor to the Space Shuttle. The Constellation program was canceled, however, in 2010 due to factors that included cost and schedule growth and funding gaps. Following Constellation, the National Aeronautics and Space Administration Authorization Act of 2010 directed NASA to develop a Space Launch System, to continue development of a crew vehicle, and to prepare infrastructure at Kennedy Space Center to enable processing and launch of the launch system. To fulfill this direction, NASA formally established the SLS program in 2011. Then, in 2012, the Orion project transitioned from its development under the Constellation program to a new development program aligned with SLS. To transition Orion from Constellation, NASA adapted the requirements from the former Orion plan to those of the newly created SLS and the associated ground systems programs. In addition, NASA and the European Space Agency agreed that the European Space Agency would provide a portion of the service module for Orion. Figure 1 provides details about the heritage of each SLS hardware element and its source, and identifies the major portions of the Orion crew vehicle.

The EGS program was established to modernize the Kennedy Space Center to prepare for integrating hardware from the three programs, as well as for processing and launching SLS and Orion and recovering the Orion crew capsule. EGS is made up of nine major components: the Vehicle Assembly Building, Mobile Launcher, Launch Control Center and software, Launch Pad 39B, Crawler-Transporter, Launch Equipment Test Facility, Spacecraft Offline Processing, Launch Vehicle Offline Processing, and Landing and Recovery. See figure 2 for pictures of the Mobile Launcher, Vehicle Assembly Building, Launch Pad 39B, and Crawler-Transporter.

NASA's Exploration Systems Development (ESD) organization is responsible for directing development of the three individual human spaceflight programs—SLS, Orion, and EGS—into a human space exploration system. The integration of these programs is key because all three systems must work together for a successful launch. The integration activities for ESD's portfolio occur at two levels in parallel throughout the life of the programs: as individual efforts to integrate the various elements managed within the separate programs and as a joint effort to integrate the three programs into an exploration system.

The three ESD programs support NASA's long term goal of sending humans to distant destinations, including Mars. NASA's approach to developing and demonstrating the technologies and capabilities to support its long term plans for a crewed mission to Mars includes three general stages of activities—Earth Reliant, Proving Ground, and Earth Independent.

Earth Reliant: From 2016 to 2024, NASA's planned exploration is focused on research aboard the International Space Station.
On the International Space Station, NASA is testing technologies and advancing human health and performance research that will enable deep space, long duration missions.

Proving Ground: From the mid-2020s to early-2030s, NASA plans to learn to conduct complex operations in a deep space environment that allows crews to return to Earth in a matter of days. Primarily operating in cislunar space—the volume of space around the moon featuring multiple possible stable staging orbits for future deep space missions—NASA will advance and validate capabilities required for humans to live and work at distances much farther away from our home planet, such as on Mars.

Earth Independent: From the early-2030s to the mid-2040s, planned activities will build on what NASA learns on the space station and in deep space to enable human missions to the vicinity of Mars, possibly to low-Mars orbit or one of the Martian moons, and eventually the Martian surface.

The first launch of the integrated ESD systems, EM-1, is a Proving Ground mission. EM-1 is an uncrewed test flight, currently planned for no earlier than October 2019, that will fly about 70,000 kilometers beyond the moon. The second launch, Exploration Mission 2 (EM-2), which will utilize an evolved SLS variant with a more capable upper stage, is also a Proving Ground mission, planned for no later than April 2023. EM-2 is expected to be a 10- to 14-day crewed flight with up to four astronauts that will orbit the moon and return to Earth to demonstrate the baseline Orion vehicle capability. NASA eventually plans to develop larger and more capable versions of the SLS to support Proving Ground and Earth Independent missions after EM-2.

As noted above, in April 2017 we found that, given the combined effects of ongoing technical challenges and limited cost and schedule reserves, it was unlikely that the ESD programs would achieve the November 2018 launch readiness date. We recommended that NASA confirm whether the EM-1 launch readiness date of November 2018 was achievable, as soon as practicable but no later than as part of its fiscal year 2018 budget submission process. We also recommended that NASA propose a new, more realistic EM-1 date if warranted. NASA agreed with both recommendations and stated that it was no longer in its best interest to pursue the November 2018 launch readiness date. Further, NASA stated that, in fall 2017, it planned to establish a new launch readiness date. Subsequently, in June 2017, NASA sent notification to Congress that EM-1's recommended launch date would be no earlier than October 2019.

The life cycle for NASA space flight projects consists of two phases: formulation, which takes a project from concept to preliminary design, and implementation, which includes building, launching, and operating the system, among other activities. NASA further divides formulation and implementation into pre-phase A through phase F. Major projects must get approval from senior NASA officials at key decision points before they can enter each new phase. The three ESD programs are completing design and fabrication efforts prior to beginning Phase D system assembly, integration and test, launch and checkout. Figure 3 depicts NASA's life cycle for space flight projects.

NASA's Integration Approach Offers Some Benefits but Complicates Oversight and Impairs Independence

NASA's approach for integrating and assessing programmatic and technical readiness, executed by ESD, differs from prior NASA human spaceflight programs.
This new approach offers some cost and potential efficiency benefits. However, it also brings challenges specific to its structure. In particular, there are oversight challenges because only one of the three programs, Orion, has a cost and schedule estimate for EM-2. NASA is already contractually obligating money on SLS and EGS for EM-2, but the lack of cost and schedule baselines for these programs will make it difficult to assess progress over time. Additionally, the approach creates an environment of competing interests because it relies on dual-hatted staff to manage technical and safety aspects on behalf of ESD while also serving as independent oversight of those same areas.

Integration Approach Differs from Past Human Spaceflight Programs

NASA is managing the human spaceflight effort differently than it has in the past. Historically, NASA used a central management structure to manage human spaceflight efforts for the Space Shuttle and the Constellation programs. For example, both the Shuttle and Constellation programs were organized under a single program manager and used a contractor to support integration efforts. Additionally, the Constellation program was part of a three-level organization—the Exploration Systems Mission Directorate within NASA headquarters, the Constellation program, and then projects, including the launch vehicle, crew capsule, ground systems, and other lunar-focused projects, managed under the umbrella of Constellation. Figure 4 illustrates the three-level structure used in the Constellation program. In the Constellation program, the programmatic workforce was distributed within the program and projects. For example, systems engineering and integration organizations—those offices responsible for making separate technical designs, analyses, organizations, and hardware come together to deliver a complete functioning system—were embedded within both the Constellation program and within each of the projects.

NASA's current approach is organized with ESD, rather than a contractor, as the overarching integrator for the three separate human spaceflight programs—SLS, Orion, and EGS. ESD manages both the programmatic and technical cross-program integration, and primarily relies on personnel within each program to implement its integration efforts. Exploration Systems Integration, an office within ESD, leads the integration effort from NASA headquarters. ESD officials stated that this approach is similar to that used by the Apollo program, which was also managed out of NASA headquarters. Within Exploration Systems Integration, the Cross-Program Systems Integration sub-office is responsible for technical integration, and the Programmatic and Strategic Integration sub-office is responsible for integrating the financial, schedule, risk management, and other programmatic activities of the three programs. The three programs themselves perform the hardware and software integration activities. This two-level organizational structure is shown in figure 5.

ESD is executing a series of six unique integration-focused programmatic and technical reviews at key points within NASA's acquisition life cycle, as shown in figure 6, to assess whether NASA cost, schedule, and technical commitments are being met for the three-program enterprise. These reviews cover the life cycle of the integrated programs to EM-1, from formulation to readiness to launch.
Some of these reviews are unique to ESD's role as integration manager. For example, ESD established two checkpoints—Design to Sync in 2015 and Build to Sync in 2016. The purpose of Design to Sync was to assess the ability of the integrated preliminary design to meet system requirements, similar to a preliminary design review; the purpose of Build to Sync was to assess the maturity of the integrated design in readiness for assembly, integration, and test, similar to a critical design review (CDR). At both events, NASA assessed the designs as ready to proceed. Key participants in these integration reviews include ESD program personnel and the Cross-Program Systems Integration and Programmatic and Strategic Integration staff that are responsible for producing and managing the integration activities.

ESD's Integration Approach Offers Some Cost Avoidance and Potential Efficiency Gains

ESD's integration approach offers some benefits in terms of cost avoidance relative to NASA's most recent human spaceflight effort, the Constellation program. NASA estimated it would need $190 million per year for the Constellation program integration budget. By comparison, between fiscal years 2012 and 2017, NASA requested an average of about $84 million per year for the combined integration budgets of Orion, SLS, EGS, and ESD, a significant decrease from the expected integration budget under the Constellation program. In addition, as figure 7 shows, NASA's initial estimates for ESD's required budget for integration are close to the actuals for fiscal years 2012-2017. NASA originally estimated that ESD's budget for integration would require approximately $30 million per year. ESD's integration budget was less than $30 million in fiscal years 2012 and 2013 and increased to about $40 million in fiscal year 2017—an average of about $30 million a year. According to NASA officials, some of the cost avoidance can be attributed to the difference in workforce size. The Constellation program's systems engineering and integration workforce was about 800 people in 2009, the last full year of the program, whereas ESD's total systems engineering and integration workforce in 2017 was about 500 people, including staff resident in the individual programs.

ESD officials also stated that, in addition to cost avoidance, their approach provides greater efficiency. For example, ESD officials said that decision making is much more efficient in the two-level ESD organization than in Constellation's three-level organization because the chain of command required to make decisions is shorter and more direct. ESD officials also indicated that the post-Constellation elimination of redundant systems engineering and integration staff at the program and project levels contributed to efficiency. Additionally, they stated that program staff are invested in both their respective programs and the integrated system because they work on behalf of the programs and on integration issues for ESD. Finally, they said another contribution to increased efficiency was NASA's decision to establish SLS, Orion, and EGS as separate programs, which allowed each program to proceed at its own pace. One caveat to this benefit, however, is that ESD's leaner organization is likely to face challenges to its efficiency in the integration and test phases of the SLS, Orion, and EGS programs.
We analyzed the rate at which ESD has reviewed and approved the different types of launch operations and ground processing configuration management records for integrated SLS, Orion, and EGS operations, and found that the process is proceeding more slowly than ESD anticipated. For example, as figure 8 illustrates, ESD approved 403 fewer configuration management records than originally planned in the period from March 2016 through June 2017. According to an ESD official, the lower-than-planned approval rate resulted from the time necessary to establish and implement a new review process, as well as final records arriving from the programs for review more slowly than ESD anticipated. Additionally, the official stated that the records required differing review timelines because they varied in size and scope. As figure 8 shows, ESD originally expected the number of items that needed review and approval to increase and create a "bow wave" during 2017 and 2018. In spring 2017, ESD re-planned its review and approval process and flattened the bow wave. The final date for review completion is now aligned with the new planned launch readiness date of no earlier than October 2019, which added an extra year to ESD's timeframe to complete the record reviews. While the bow wave is not as steep as it was under the original plan, ESD will continue to have a large number of records that require approval in order to support the launch readiness date. An ESD official stated that NASA had gained experience managing such a bow wave as it prepared for Orion's 2014 exploration flight test launch aboard a Delta IV rocket and as part of the Constellation program's prototype Ares launch in 2009, but acknowledged that ESD will need to be cautious that its leaner staff is not overwhelmed with documentation, which could slow down the review process.

ESD's Approach Complicates Oversight Because There Is No Mechanism to Assess Affordability beyond First Mission

ESD is responsible for overall affordability for SLS, Orion, and EGS, while each of the programs develops and maintains an individual cost and schedule baseline. The baseline is created at the point when a program receives NASA management approval to proceed into final design and production. In their respective baselines, as shown in table 1, SLS and EGS cost and schedule are baselined to EM-1, and Orion's are baselined to EM-2. NASA documentation indicates that Orion's baselines are tied to EM-2 because that is the first point at which it will fulfill its purpose of carrying crew. Should NASA determine it is likely to exceed its cost estimate baseline by 15 percent or miss a milestone by 6 months or more, NASA is required to report those increases and delays—along with their impacts—to Congress. In June 2017, NASA sent notification to Congress that the schedule for EM-1 has slipped beyond the allowed 6-month threshold, but stated that cost is expected to remain within the 15 percent threshold.

NASA has not established EM-2 cost baselines or expected total life-cycle costs for SLS and EGS, including costs related to the larger and more capable versions of SLS needed to implement the agency's plans to send crewed missions to Mars.
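To make the mechanics of these reporting thresholds concrete, the following Python sketch flags the two triggers described above: cost growth of more than 15 percent over the baseline, or a milestone slip of 6 months or more. This is an illustrative sketch, not NASA's actual tooling, and the dollar figures are hypothetical placeholders; only the EM-1 slip from November 2018 to no earlier than October 2019 comes from the report. The example also underscores the oversight gap: neither check can be run at all for SLS or EGS work beyond EM-1, because no baseline exists to compare against.

```python
from datetime import date

def months_between(baseline: date, forecast: date) -> int:
    """Whole months from the baseline milestone date to the forecast date."""
    return (forecast.year - baseline.year) * 12 + (forecast.month - baseline.month)

def breach_flags(baseline_cost_m, current_cost_m, baseline_date, forecast_date):
    """Return the congressional reporting triggers a program has hit."""
    flags = []
    growth = (current_cost_m - baseline_cost_m) / baseline_cost_m
    slip = months_between(baseline_date, forecast_date)
    if growth > 0.15:
        flags.append(f"cost growth of {growth:.0%} exceeds the 15 percent threshold")
    if slip >= 6:
        flags.append(f"schedule slip of {slip} months meets the 6-month threshold")
    return flags or ["within reporting thresholds"]

# EM-1 slipped from November 2018 to no earlier than October 2019 (11 months);
# the cost figures below are hypothetical, not actual program baselines.
print(breach_flags(7000, 7300, date(2018, 11, 1), date(2019, 10, 1)))
```

Run on these assumed inputs, the check reports only the schedule breach, consistent with NASA's June 2017 notification that the EM-1 schedule slipped beyond the 6-month threshold while cost remained within the 15 percent threshold.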
GAO’s Cost Estimating and Assessment Guide, a guidebook of cost estimating best practices developed in concert with the public and private sectors, identifies baselines as a critical means for measuring program performance over time and addresses how a baseline backed by a realistic cost estimate increases the probability of a program’s success. In addition, prior GAO work offers insight into the benefits of how baselines enhance a program’s transparency. For example, we found in 2009 that costs for the Missile Defense Agency’s (MDA) ballistic missile defense system had grown by at least $1 billion, and that lack of baselines for each block of capability hampered efforts to measure progress and limited congressional oversight of MDA’s work. MDA responded to our recommendation to establish these baselines and, in 2011, we reported that MDA had a new process for setting detailed baselines, which had resulted in a progress report to Congress more comprehensive than the one it provided in 2009. To that end, we have made recommendations in the past on the need for NASA to baseline the programs’ costs for capabilities beyond EM-1; however, a significant amount of time has passed without NASA taking steps to fully implement these recommendations. Specifically, in May 2014, we recommended that, to provide Congress with the necessary insight into program affordability, ensure its ability to effectively monitor total program costs and execution, and to facilitate investment decisions, NASA’s Administrator should direct the Human Exploration and Operations Mission Directorate to: Establish a separate cost and schedule baseline for work required to support the SLS for EM-2 and report this information to the Congress through NASA’s annual budget submission. If NASA decides to fly the SLS configuration used in EM-2 beyond EM-2, establish separate life cycle cost and schedule baseline estimates for those efforts, to include funding for operations and sustainment, and report this information annually to Congress via the agency’s budget submission; and Establish separate cost and schedule baselines for each additional capability that encompass all life cycle costs, to include operations and sustainment, because NASA intends to use the increased capabilities of the SLS, Orion, and ground support efforts well into the future and has chosen to estimate costs associated with achieving the capabilities. As part of the latter recommendation, we stated that, when NASA could not fully specify costs due to lack of well-defined missions or flight manifests, the agency instead should forecast a cost estimate range— including life cycle costs—having minimum and maximum boundaries and report these baselines or ranges annually to Congress via the agency’s budget submission. In its comments on our 2014 report, NASA partially concurred with these two recommendations, noting that much of what it had already done or expected to do would address them. For example, the agency stated that establishing the three programs as separate efforts with individual cost and schedule commitments met GAO’s intent as would its plans to track and report development, operations, and sustainment costs in its budget to Congress as the capabilities evolved. 
In our response, we stated that while NASA's prior establishment of three separate programs lends some insight into expected costs and schedule at the broader program level, it does not meet the intent of the two recommendations because cost and schedule identified at that level is unlikely to provide the detail necessary to monitor the progress of each block against a baseline. Further, reporting the costs via the budget process alone will not provide information about potential costs over the long term because budget requests neither offer all the same information as life-cycle cost estimates nor serve the same purpose. Life-cycle cost estimates establish a full accounting of all program costs for planning, procurement, operations and maintenance, and disposal, and provide a long-term means to measure progress over a program's life span.

In 2016, NASA requested closure of these recommendations, citing, among other factors, changes to the programs' requirements, design, architecture, and concept of operations. However, NASA's request did not identify any steps taken to meet the intent of these two recommendations, such as establishing cost and schedule baselines for EM-2, baselines for each increment of SLS, Orion, or ground systems capability, or documentation of life cycle cost estimates with minimum and maximum boundaries. Further, a senior-level ESD official told us that NASA does not intend to establish a baseline for EM-2 because it is not required to do so.

The limited scope that NASA has chosen to use as the basis for formulating the programs' cost baselines does not provide the transparency necessary to assess long-term affordability. Plainly, progress cannot be assessed without a baseline against which to compare current costs; without one, it is difficult to assess program affordability and for Congress to make informed budgetary decisions. Because NASA has not acted on our 2014 recommendations, it is now contractually obligating billions of dollars in potential costs for EM-2 and beyond without a baseline against which to assess progress. For example:

in fiscal year 2016, the SLS program awarded two contracts to Aerojet Rocketdyne: a $175 million contract for RL-10 engines to power the exploration upper stage during EM-2 and EM-3, and a $1.2 billion contract to restart the RS-25 production line required for engines for use beyond EM-4 and to produce at least 4 additional RS-25 engines;

in 2017, SLS modified the existing Boeing contract upwards by $962 million for work on the exploration upper stage that SLS will use during EM-2 and future flights; and

on a smaller scale, in fiscal year 2016 the EGS program obligated $4.8 million to support the exploration upper stage and EM-2.

As illustrated by these contracting activities, the SLS program is obligating more funds for activities beyond EM-1 than Congress directed. Specifically, of approximately $2 billion appropriated for the SLS program, the Consolidated Appropriations Act, 2016, directed that NASA spend not less than $85 million for enhanced upper stage development for EM-2. NASA has chosen to allocate about $360 million of its fiscal year 2016 SLS appropriations towards EM-2, including enhanced upper stage development, additional performance upgrades, and payload adapters, without a baseline to measure progress and ensure transparency.
The NASA Inspector General (IG) also recently reported that NASA is spending funds on EM-2 efforts without a baseline in place and expressed concerns about the need for EM-2 cost estimates. Because NASA has not implemented our recommendations, it may now be appropriate for Congress to take action to require EM-2 cost and schedule baselines for SLS and EGS, and separate cost and schedule baselines for additional capabilities developed for Orion, SLS, and EGS for missions beyond EM-2. These baselines would be important tools for Congress to make informed, long-term budgetary decisions with respect to NASA's future exploration missions, including Mars.

Organizational Structure Impairs Independence of Engineering and Safety Technical Oversight

NASA's governance model prescribes a management structure that employs checks and balances among key organizations to ensure that decisions have the benefit of different points of view and are not made in isolation. As part of this structure, NASA established the technical authority process as a system of checks and balances to provide independent oversight of programs and projects in support of safety and mission success through the selection of specific individuals with delegated levels of authority. The technical authority process has also been used for acquisitions in other parts of the government, including the Department of Defense and the Department of Homeland Security. ESD is organizationally connected to three technical authorities within NASA:

The Office of the Chief Engineer technical authority is responsible for ensuring from an independent standpoint that the ESD engineering work meets NASA standards;

The Office of Safety and Mission Assurance technical authority is responsible for ensuring from an independent standpoint that ESD products and processes satisfy NASA's safety, reliability, and mission assurance policies; and

The Office of Chief Health and Medical technical authority is responsible for ensuring from an independent standpoint that ESD programs meet NASA's health and medical standards.

These NASA technical authorities have delegated responsibility for their respective technical authority functions directly to ESD staff. According to NASA's project management requirements, the program or project manager is ultimately responsible for the safe conduct and successful outcome of the program or project in conformance with governing requirements, and those responsibilities are not diminished by the implementation of technical authority.

ESD has established an organizational structure in which the technical authorities for engineering and for safety and mission assurance (S&MA) are dual-hatted, serving simultaneously in programmatic positions. The chief engineer technical authority also serves as the Director of ESD's Cross-Program Systems Integration Office, and the S&MA technical authority also serves as the ESD Safety and Mission Assurance Manager. In their programmatic roles for ESD, these individuals manage resources, including budget and schedule, to address engineering and safety issues. In their technical authority roles, the same individuals are to provide independent oversight of programs and projects in support of safety and mission success.
Having the same individual simultaneously fill both a technical authority role and a program role creates an environment of competing interests: the technical authority's ability to impartially and objectively assess the programs may be impaired while that individual is also acting on behalf of ESD in programmatic capacities. This duality makes the technical authorities more subject to program pressures of cost and schedule. Figure 9 describes some of the conflicting roles and responsibilities of these officials in their two different positions. The concurrency of duties leaves the positions open to conflicting goals of safety, cost, and schedule and increases the potential for the technical authorities to become subject to cost and schedule pressures. For example:

the dual-hatted engineering and S&MA technical authorities serve on decision-making boards in both technical authority and programmatic capacities, making them responsible for providing input on technical and safety decisions while also keeping an eye on the bottom line for ESD's cost and schedule; and

the technical authorities are positioned such that they have been the reviewers of the ESD programmatic areas they manage—in essence, "grading their own homework." For example, at ESD's Build to Sync review in 2016, the engineering and S&MA technical authorities evaluated the areas that they manage in their respective capacities as ESD Director of Cross-Program Systems Integration and ESD Safety and Mission Assurance Manager. This process relied on their ability as individuals to completely separate the two roles: in one capacity, managing technical and safety issues within programmatic cost and schedule constraints; in the other, assessing those same issues with an independent eye.

NASA officials identified several reasons why the dual-hat structure works for their purposes. Agency officials stated that one critical factor to successful dual-hatting is having the "right" people in those dual-hat positions—that is, personnel with the appropriate technical knowledge to do the work and the ability to act both on behalf of ESD and independently of it. Officials also indicated that technical authorities retain independence because their technical authority reporting paths and performance reviews are all within their technical authority chain of command rather than under the purview of the ESD chain of command.
Additionally, agency officials said that dual-hat roles are a commonplace practice at NASA and cited other factors in support of the approach, including that:

it would not be an efficient use of resources to have an independent technical authority with no program responsibilities because that person would be unlikely to have sufficient program knowledge to provide useful insight and could slow the program's progress;

a technical authority that does not consider cost and schedule is not helpful to the program because it is unrealistic to disregard those aspects of program management;

a strong dissenting opinion process is in place and allows for issues to be raised through various levels up to the Administrator level within NASA; and

ESD receives additional independent oversight through three NASA internal organizations: the independent review teams that provide independent assessments of a program's technical and programmatic status and health at key points in its life cycle; the NASA Engineering and Safety Center that conducts independent safety and mission success-related testing, analysis, and assessments of NASA's high-risk projects; and the Aerospace Safety Advisory Panel (ASAP) that independently oversees NASA's safety performance.

These factors that NASA officials cite in support of the dual-hat approach minimize the importance of having independent oversight and place ESD at risk of fostering an environment in which there is no longer a balance between preserving safety and meeting the demands of cost and schedule. The Columbia Accident Investigation Board (CAIB) report—the result of an in-depth assessment of the technical and organizational causes of the Columbia accident—concluded that NASA's organization for the Shuttle program combined, among other things, all authority and responsibility for schedule, cost, safety, and technical requirements, and that this was not an effective check and balance. The CAIB report recommended that NASA establish a technical authority to serve independently of the Space Shuttle program so that employees would not feel constrained from bringing forward safety concerns or disagreements with programmatic decisions. The Board's findings that led to this recommendation included a broken safety culture in which it was difficult for minority and dissenting opinions to percolate up through the hierarchy; dual Center and programmatic roles vested in one person that had confused lines of authority, responsibility, and accountability and made the oversight process susceptible to conflicts of interest; and oversight personnel in positions within the program, increasing the risk that these staffs' perspectives would be hindered by too much familiarity with the programs they were overseeing.

ESD officials stated that they had carefully and thoughtfully implemented the intent of the CAIB; they said they had not disregarded its findings and recommendations but instead established a technical authority in a way that best fit the context of ESD's efforts. These officials did acknowledge, though, that the dual-hat approach does not align with the CAIB report's recommendation to separate programmatic and technical authority or with NASA's governance framework. Further, over the course of our review, we spoke with various high-ranking officials outside and within NASA who expressed some reservations about ESD's dual-hat approach.
For example:

The former Chairman of the CAIB stated that, even though the ESD programs are still in development, he believes the technical authority should be institutionally protected against the pressures of cost and schedule, and added that NASA should never be lulled into dispensing with engineering and safety independence because human spaceflight is an extremely risky enterprise.

Both NASA's Chief Engineer and Chief of S&MA acknowledged there is inherent conflict in the concurrent roles of the dual hats, while also expressing great confidence in the ESD staff now in the dual roles.

NASA's Chief of S&MA indicated that the dual-hat S&MA structure is working well within ESD, but he believes these dual-hatted roles may not necessarily meet the intent of the CAIB's recommendation because the Board envisioned an independent safety organization completely outside the programs.

NASA's Chief Engineer stated that he believes technical authority should become a separate responsibility and position as ESD moves forward with integration of the three programs and into their operation as a system.

As these individuals made clear, ensuring that the ESD engineering and S&MA technical authorities remain independent of cost and schedule conflicts is key to human spaceflight success and safety. Along these lines, the ASAP previously conveyed concerns about NASA's implementation of technical authority that continue to be valid today. In particular, the ASAP stated in a 2013 report that NASA's technical authority was working at that time in large measure due to the well-qualified, strong personnel that had been assigned to the process. The panel noted, however, that a conflict, or a weakening of the placement of strong individuals in the technical authority position, could introduce greater risk into a program. Although a current ASAP official stated she had no concerns with ESD's present approach to technical authority, the panel's prior caution remains applicable, and the risk that the ASAP identified earlier could be realized if not mitigated by eliminating the potential for competing interests within the ESD engineering and S&MA positions.

NASA is currently concluding an assessment of the implementation of the technical authority role to determine how well that function is working across the agency. According to the official responsible for leading the study, the assessment includes examining the evolution of the technical authority role over the years and whether NASA is spending the right amount of funds for those positions. NASA expects to have recommendations in 2017 on how to improve the technical authority function, but does not expect to address the dual-hat construct. A principle of federal internal controls is that an agency should design control activities to achieve objectives and respond to risks, which includes segregating key duties and responsibilities to reduce the risk of error, misuse, or fraud. By overlapping technical authority and programmatic responsibilities, NASA will continue to run the risk of creating an environment of competing interests for the ESD engineering and S&MA technical authorities.

ESD Risk Posture Has Improved, but Key Risk Areas Remain for the Integration Effort

Despite the development and integration challenges associated with a new human spaceflight capability, ESD has improved its overall cross-program risk posture over the past 2 years.
Nonetheless, it still faces key integration risk areas within software development and verification and validation (V&V). Both are critical to readiness for EM-1 because software acts as the "brain" that ties SLS, Orion, and EGS together in a functioning body, while V&V ensures the integrated body works as expected. The success of these efforts forms the foundation for a launch, no matter the date of EM-1.

ESD's Cross-Program Risk Posture Has Improved

We have previously reported on individual SLS, Orion, and EGS program risks that were contributing to potential delays within each program. For example, in July 2016, we found that delays with the European Service Module—which will provide services to the Orion crew module in the form of propulsion, consumables storage, and heat rejection and power—could potentially affect the Orion program's schedule. Subsequently, in April 2017, we found that those delays had worsened and were contributing to the program likely not making a November 2018 launch readiness date. All three programs continue to manage such individual program risks, which is to be expected of programs of this size and complexity. The programs may choose to retain these risks in their own risk databases or elevate them to ESD to track mitigation steps. A program would elevate a risk to ESD when decisions are needed by ESD management, such as a need for additional resources or requirement changes. Risks with the greatest potential for negative impacts are categorized as top ESD risks. In addition to these individual program risks that are elevated to ESD, ESD is also responsible for overseeing cross-program risks that affect multiple programs. An example of a cross-program risk is the potential for delayed delivery of data from SLS and Orion to affect the EGS software development schedule.

ESD has made progress reducing risks over the last 2 years, from the point of the Design to Sync preliminary design review equivalent for the integrated programs to the Build to Sync critical design review equivalent. As figure 10 illustrates, ESD has reduced its combined total of ESD and cross-program risks from 39 to 25 over this period, and reduced the number of high risks from about 49 percent of the total to about 36 percent. The ESD risk system is dynamic, with risks entering and dropping out of the system over time as development proceeds and risk mitigation is completed. A total of 29 of the 39 risks within the ESD risk portfolio were removed from the register, and 15 risks were added, between November 2014, prior to Design to Sync, and March 2017, after Build to Sync. Examples of risks removed over this time period include risks associated with late delivery of Orion and SLS ground support equipment hardware to EGS and with establishing a management process to identify risks stemming from the programs being at differing points in development. Nine risks remained active in the system over the 2-year period we analyzed, and the estimated time to complete mitigation slipped for the majority of these nine risks. Three of the nine risks that have remained active in the risk system since before Design to Sync are still classified as high risk; the remaining six are classified as medium risk. Mitigation is an action taken to eliminate or reduce the potential severity of a risk, either by reducing the probability of it occurring, by reducing the level of impact if it does occur, or both.
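To make these mechanics concrete, the following Python sketch models a risk register entry under an assumed 5-by-5 likelihood-by-consequence scoring scheme of the kind commonly used in agency risk management. The cut points, field names, and example risk are illustrative assumptions, not ESD's actual scoring rules; the sketch simply shows how mitigation can move a risk between categories by reducing probability, impact, or both.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Risk:
    title: str
    likelihood: int   # 1 (remote) .. 5 (near certain) -- assumed scale
    consequence: int  # 1 (minor)  .. 5 (severe)       -- assumed scale

    def score(self) -> int:
        return self.likelihood * self.consequence

    def category(self) -> str:
        # Illustrative cut points; not ESD's actual thresholds.
        if self.score() >= 15:
            return "high"
        return "medium" if self.score() >= 6 else "low"

def mitigate(risk: Risk, d_likelihood: int = 0, d_consequence: int = 0) -> Risk:
    """Mitigation reduces the probability of occurrence, the impact, or both."""
    return replace(risk,
                   likelihood=max(1, risk.likelihood - d_likelihood),
                   consequence=max(1, risk.consequence - d_consequence))

vv_gap = Risk("Gaps in cross-program V&V coverage", likelihood=4, consequence=4)
print(vv_gap.category())                            # "high" (score 16)
print(mitigate(vv_gap, d_likelihood=2).category())  # "medium" (score 8)
```

Under this framing, a risk whose mitigation steps are tied to slipping hardware delivery or launch dates simply keeps its score unchanged for longer, which is consistent with the delayed mitigation completion dates discussed next.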
ESD officials indicated a number of reasons why risks could take longer to mitigate. For instance, risks with long-term mitigation strategies may go for extended periods of time without score changes. In addition, ESD may conduct additional risk assessments and determine that certain risks need to be reprioritized over time and that resources should be focused on higher risks. Further, some risk mitigation steps are tied to hardware delivery and launch dates, and as those dates slip, the risk mitigation steps will as well. As illustrated in table 2, we found that six of these nine risks were related to software and V&V and accounted for some of the largest delays in estimated completion dates. On average, the estimated completion dates for these six risks were delayed about 16 months. In addition, the two V&V risks that have remained active since before Design to Sync were still considered top ESD risks as of March 2017, when we completed this analysis.

Software Development Is a Key Risk Area Facing the Integration Effort

Software development is one of the top cross-program technical issues facing ESD as the programs approach EM-1. Software is a key enabling technology required to tie the human spaceflight systems together. Specifically, for ESD to achieve EM-1 launch readiness, software developed within each of the programs has to be able to link and communicate with software developed in the other programs in order to enable a successful launch. Furthermore, software development continues after hardware development and is often used to help resolve hardware deficiencies discovered during systems integration and test. ESD has defined six critical paths—each the path of longest duration through a sequence of activities that determines the earliest completion date—for its programs to reach EM-1, and three are related to software development. These three software critical paths support interaction and communication between the systems the individual programs are developing: SLS to EGS software, Orion to EGS software, and the Integrated Test Laboratory (ITL) facility that supports Orion software and avionics testing as well as some SLS and EGS testing. The other critical paths are development of the Orion crew service module, the SLS core stage, and the EGS Mobile Launcher.

Because of software's importance to EM-1 launch readiness, ESD is putting a new method in place to measure how well these software efforts are progressing along their respective critical paths. To that end, it is currently developing a set of "Key Progress Indicators" milestones that will include baseline and forecast dates. Officials indicated that these metrics will allow ESD to better track progress of the critical path software efforts toward EM-1 during the remainder of the system integration and test phase. ESD officials have indicated, however, that identifying and establishing appropriate indicators is taking longer than expected and proving more difficult than anticipated.

One of the software testing critical paths, the ITL, has already experienced delays that slipped completion of planned software testing from September 2018 until March 2019, a delay of 6 months. Officials told us that this delay was primarily due to a series of late avionics and software deliveries by the European Space Agency for Orion's European Service Module.
The delay in the Orion testing in turn affects SLS and EGS software testing and integration because those activities are informed by the completion of the Orion software testing. Furthermore, some EGS and SLS software testing scheduled to be conducted within the ITL has been re-planned as a result of the Orion delays. The Orion program indicates that it has taken action to mitigate ITL issues as they arise. For example, the European Service Module avionics and software delivery delay opened a 125-day gap between completion of crew module testing and service module testing. Orion officials indicated that the program had planned to proceed directly into testing of the integrated crew module and service module software and systems, but the integrated testing cannot be conducted until the service module testing is complete. As illustrated by figure 11, to mitigate the impact of the delay, Orion officials indicated that the program filled this gap by rescheduling other ITL activities, such as software integration testing and dry runs for the three programs. These adjustments narrowed the ITL schedule gap from 125 days to 24 days. The officials stated that they will continue to adjust the schedule to eliminate gaps.

The other two software critical paths—SLS to EGS and Orion to EGS software development—are also experiencing software development issues. In July 2016, for example, we found that delays in SLS and Orion requirements development, as well as the programs' decisions to defer software content to later in development, were delaying EGS's efforts to develop ground command and control software and increasing cost and schedule. Furthermore, ESD reports show that delays and content deferral in Orion and SLS software development continue to affect EGS software development and could delay launch readiness. For example, ESD and EGS are both tracking an EGS data throughput risk: the ground control system software is not currently designed to process the volume of telemetry it will receive while also providing commands to SLS and ground equipment as required during launch operations. EGS officials stated that, if the risk is not addressed and an SLS or Orion failure occurs, the ground control system software may not display the necessary data to launch operations technicians. EGS officials told us that the mismatch between the amount of data being sent to the ground control software and the amount it is designed to process arose because no program was constrained in identifying its data throughput. These officials stated that, in retrospect, they should have established an interface control document to manage the process. The officials also stated that the program is taking steps to mitigate this risk, including defining or constraining the data parameters and buying more hardware to increase the amount of data throughput that can be managed, but will not know if the risk is fully mitigated until additional data are received and analyzed during upcoming tests. For example, EGS officials stated that the green run test will provide additional data to help determine whether the steps they are taking address this throughput risk. If the program determines the risk is not fully mitigated and additional software redesign is required, it could lead to schedule delays.
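The kind of budget check an interface control document would have enforced can be sketched in a few lines of Python. The capacity and per-program telemetry rates below are hypothetical, since the report does not give actual figures; the sketch simply shows how declared data rates could be summed against the ground software's processing capacity, with the two mitigation paths the officials described, constraining the data parameters or adding processing hardware, as the responses when the margin goes negative.

```python
# Hypothetical figures for illustration; the report does not state actual rates.
CAPACITY_MBPS = 100.0  # assumed processing capacity of the ground control software

declared_rates_mbps = {  # assumed per-program telemetry declarations
    "SLS": 60.0,
    "Orion": 35.0,
    "Ground support equipment": 20.0,
}

total = sum(declared_rates_mbps.values())
margin = CAPACITY_MBPS - total

if margin < 0:
    # The two mitigations described above: constrain the declared parameters,
    # or buy hardware to raise processing capacity.
    print(f"Over capacity by {-margin:.1f} Mbps: constrain data parameters "
          f"or add processing hardware.")
else:
    print(f"Within capacity with {margin:.1f} Mbps of margin.")
```

The design point the officials made is visible in the sketch: because no program was constrained up front, each could declare any rate, and nothing reconciled the sum against capacity until integration testing.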
ESD officials overseeing software development acknowledged that software development for the integrated systems is a difficult task and said they expect to continue to encounter and resolve software development issues during cross-program integration and testing. As we have found in past reviews of NASA and Department of Defense systems, software development is a key risk area during system integration and testing. For example, we found in April 2017 that software delivery delays and development problems that the U.S. Air Force's F-35 program experienced during system integration and testing were likely to extend that program's development by 12 months and increase its costs by more than $1.7 billion.

Verification and Validation Will Remain Key Risk Area to Monitor as NASA Establishes and Works towards New Launch Readiness Date

Verification and validation (V&V) is acknowledged by ESD as a top cross-program integration risk that NASA must monitor as it establishes and works toward a new EM-1 launch readiness date. V&V is a culminating development activity prior to launch for determining whether integrated hardware and software will perform as expected. V&V consists of two equally important aspects: verification is the process for determining whether a product fulfills the requirements or specifications established for it at the start of the development phase, and validation is the assessment of a planned or delivered system's ability to meet the sponsor's operational need in the most realistic environment achievable during the course of development or at the end of development. Like software development and testing, V&V is typically complex and is made even more so by the need to verify and validate how SLS, Orion, and EGS work together as an integrated system.

ESD's V&V plans for the integrated system have been slow to mature. In March 2016, leading up to ESD's Build to Sync review, ESD performed an audit of V&V-related documentation for the program CDRs and ESD Build to Sync. The audit found that 54 of 257 auditable areas (21 percent) were not mature enough to meet NASA engineering policy guidance for that point in development. According to ESD documentation, there were several causes of this immaturity, including incomplete documentation and inconsistent requirements across the three programs. NASA officials told us that our review prompted ESD to conduct a follow-up and track the status of these areas. As of June 2017, 6 months after Build to Sync was completed, 53 of the 54 auditable areas were closed, meaning these areas are at or have exceeded the CDR level of maturity. NASA officials indicated that the remaining auditable area, which is related to the test plan for the integrated communication network, was closed in August 2017.

Nevertheless, other potential V&V issues remain. According to ESD officials, distributing responsibility for V&V across the three programs has created an increased potential for gaps in testing. If gaps are discovered during testing, or if integrated systems do not perform as planned, money and time for modifications to hardware and/or software may be necessary, as well as time for retesting. This could result in delayed launch readiness. As a result, mature V&V plans are needed to ensure there are no gaps in planned testing.
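A minimal illustration of how such gaps arise, and how a mature plan would surface them, is a coverage check that maps every integrated requirement to at least one planned verification event. The requirement IDs and test events in the Python sketch below are hypothetical; the point is that when V&V responsibility is split across three programs, a requirement can go unverified because each program assumes another will cover it, and only a cross-program roll-up reveals the hole.

```python
# Hypothetical cross-program requirements and planned verification events.
requirements = ["INT-001", "INT-002", "INT-003", "INT-004"]

planned_verification = {  # test event -> integrated requirements it verifies
    "SLS core stage green run": ["INT-001"],
    "Orion ITL software test":  ["INT-002"],
    "EGS launch countdown sim": ["INT-002"],  # overlap: two events cover INT-002
}

covered = {req for reqs in planned_verification.values() for req in reqs}
gaps = [req for req in requirements if req not in covered]
print("Requirements with no planned verification:", gaps)  # INT-003, INT-004
```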
ESD officials indicated that a NASA Engineering and Safety Center review of their V&V plans, requested by ESD's Chief Engineer to address concerns about V&V planning, would help define the path forward for maturing V&V plans. V&V issues add to cost and schedule risk for the program because they may take more time and money to resolve than ESD anticipates. In some cases, they may have a safety impact as well. For example, if the structural models are not sufficiently verified, flight safety risks increase. Each of the programs bases its individual analyses on the models of the other programs. As a result, any deficiencies discovered in one can have cascading effects through the other systems and programs. We will continue to monitor ESD's progress mitigating risks as NASA approaches EM-1.

Conclusions

NASA is at the beginning of the path leading to human exploration of Mars. The first phase along that path, the integration of SLS, Orion, and EGS, is likely to set the stage for the success or failure of the rest of the endeavor. Establishing a cost and schedule baseline for NASA's second mission is an important initial step in understanding and gaining support for the costs of SLS, Orion, and EGS, not just for that one mission but for the Mars plan overall. NASA's ongoing refusal to establish this baseline is short-sighted, because EM-2 is part of a larger conversation about the affordability of a crewed mission to Mars. While later stages of the Mars mission are well in the future, getting to that point in time will require a funding commitment from the Congress and other stakeholders. Much of their willingness to make that commitment is likely to be based on the ability to assess the extent to which NASA has met prior goals within predicted cost and schedule targets. Furthermore, as ESD moves SLS, Orion, and EGS from development to integrated operations, its efforts will reach the point when human lives will be placed at risk. Space is a severe and unforgiving environment; the Columbia accident showed the disastrous consequences of mistakes. As the Columbia Accident Investigation Board report made clear, a program's management approach is an integral part of ensuring that human spaceflight is as safe and successful as possible. The report also characterized independence as key to achieving that safety and success. ESD's approach, however, tethers independent oversight to program management by having key individuals wear both hats at the same time. As a result, NASA is relying heavily on the personality and capability of those individuals to maintain independence rather than on an institutional process, which diminishes lessons learned from the Columbia accident.

Matter for Congressional Consideration

We are making the following matter for congressional consideration. Congress should consider requiring the NASA Administrator to direct the Exploration Systems Development organization within the Human Exploration and Operations Mission Directorate to establish separate cost and schedule baselines for work required to support SLS and EGS for Exploration Mission 2, and to establish separate cost and schedule baselines for each additional capability that encompass all life cycle costs, to include operations and sustainment. (Matter for Consideration 1)

Recommendation for Executive Action

We are making the following recommendation to the Exploration Systems Development organization.
Exploration Systems Development should no longer dual-hat individuals with both programmatic and technical authority responsibilities. Specifically, the technical authority structure within Exploration Systems Development should be restructured to ensure that technical authorities for the Offices of the Chief Engineer and Safety and Mission Assurance are not fettered with programmatic responsibilities that create an environment of competing interests that may impair their independence. (Recommendation 1)

Agency Comments and Our Evaluation

NASA provided written comments on a draft of this report. These comments are reprinted in appendix II. NASA also provided technical comments, which were incorporated as appropriate. In responding to a draft of our report, NASA partially concurred with our recommendation that the Exploration Systems Development (ESD) organization should no longer dual-hat individuals with both programmatic and technical authority responsibilities. Specifically, we recommended that the technical authority structure within ESD be restructured to ensure that technical authorities for the Offices of the Chief Engineer and Safety and Mission Assurance are not fettered with programmatic responsibilities that create an environment of competing interests that may impair their independence. In response to this recommendation, NASA stated that it created the technical authority governance structure after the Columbia Accident Investigation Board report and that the dual-hat technical authority structure has been understood and successfully implemented within ESD. NASA recognized, however, that as the program moves from the design and development phase into the integration and test phase, it anticipates that the ESD environment will encounter more technical issues that will, by necessity, need to be quickly evaluated and resolved. NASA asserted that within this changed environment it would be beneficial for the Engineering Technical Authority role to be performed by the Human Exploration and Operations Chief Engineer (who reports to the Office of the Chief Engineer). NASA stated that over the next year or so, it would solicit detailed input from these organizations and determine how best to support the program while managing the transition to integration and test, and it anticipated closing this recommendation by September 30, 2018. We agree that NASA should solicit detailed input from key organizations within the agency as it transitions away from the dual-hat technical authority structure to help ensure successful implementation of a new structure. To implement this recommendation, however, NASA needs to assign the technical authority role to a person who does not have programmatic responsibilities, to ensure independence from responsibilities related to cost and schedule performance. To fulfill this role, the person may need to reside outside of the Human Exploration and Operations Mission Directorate, and NASA should solicit input from the Office of the Chief Engineer when making this decision to ensure that there are no competing interests for the technical authority. Moreover, in its response, NASA does not address the dual-hat technical authority role for Safety and Mission Assurance. We continue to believe that similar changes for this role would be appropriate as well. Further, in response to this recommendation, NASA makes two statements that require additional context.
First, NASA stated that GAO's recommendation was focused on overall Agency technical authority management. While this review involved meeting with the heads of the Office of the Chief Engineer and the Office of Safety and Mission Assurance, the scope of this review and the associated recommendation are limited to ESD. Second, NASA stated, "As you found, we agree that having the right personnel in senior leadership positions is essential for a Technical Authority to be successful regardless of how the Technical Authority is implemented." To clarify, this perspective is attributed to NASA officials in our report and does not represent GAO's position. We are sending copies of this report to NASA's Administrator and to appropriate congressional committees. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or chaplainc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

Appendix I: Objectives, Scope, and Methodology

This report assesses (1) the benefits and challenges of the National Aeronautics and Space Administration's (NASA) approach for integrating and assessing the programmatic and technical readiness of Orion, SLS, and EGS; and (2) the extent to which the Exploration Systems Development (ESD) organization is managing cross-program risks that could affect launch readiness. To assess the benefits and challenges of NASA's approach for integrating and assessing the programmatic and technical readiness of its current human spaceflight programs relative to other selected programs, we reviewed and analyzed NASA policies governing program and technical integration, including cost, schedule, and risk. We obtained and analyzed ESD implementation plans to assess the role of ESD in cross-program integration of the three programs. We reviewed the 2003 Columbia Accident Investigation Board report's findings and recommendations related to culture and organizational management of human spaceflight programs, as well as the Constellation program's lessons learned report. We reviewed detailed briefings and documentation from the Cross-Program Systems Integration and Programmatic and Strategic Integration teams explaining ESD's approach to programmatic and technical integration, including implementation of systems engineering and integration. We interviewed NASA officials to discuss the benefits and challenges of NASA's integration approach and their roles and responsibilities in managing and overseeing the integration process. We met with the technical authorities and other representatives from the NASA Office of the Chief Engineer, the Office of Safety and Mission Assurance, and Crew Health and Safety; addressed cost and budgeting issues with the Chief Financial Officer; and discussed and documented their roles in executing and overseeing the ESD programs. We also interviewed outside subject matter experts to gain their insight into ESD's implementation of NASA's program management policies on the independent technical authority structure. Additionally, we compared historical budget data from the now-cancelled Constellation program to ESD budget data and quantified systems engineering and integration budget savings through preliminary design review, the point at which the Constellation program was cancelled.
In addition, we assessed the scope of NASA's funding estimates for the second exploration mission and beyond against best practices criteria outlined in GAO's cost estimating guidebook. We assessed the reliability of the budget data obtained using GAO reliability standards as appropriate. We compared the benefits and challenges of NASA's integration approach to those of other complex, large-scale government programs, including NASA's Constellation program and the Department of Defense's Missile Defense Agency programs. To determine the extent to which ESD is managing cross-program risks that could affect launch readiness, we obtained and reviewed NASA and ESD risk management policies; detailed monthly and quarterly briefings; and documentation from the Cross-Program Systems Integration and Programmatic and Strategic Integration teams explaining ESD's approach to identifying, tracking, and mitigating cross-program risks. We reviewed Cross-Program Systems Integration systems engineering and systems integration areas, as well as Programmatic and Strategic Integration risks, cost, and schedule, to determine which efforts presented the highest risk to cross-program cost and schedule. We conducted an analysis of ESD's risk dataset and the programs' detailed risk reports, which list program risks and their potential schedule impacts, including mitigation efforts to date. We examined risk report data from the Design to Sync review through the Build to Sync review and focused our analyses on risks with current mitigation plans to determine whether risk mitigation plans were proceeding on schedule. We did not analyze risks that were categorized under "Accept," "Candidate," "Research," "Unknown," or "Watch" because these risks were not assigned an active mitigation plan by ESD. To assess the reliability of the data, we reviewed related documentation and interviewed knowledgeable agency officials. We determined the data were sufficiently reliable for identifying risks and schedule delays associated with those risks. We examined ESD integrated testing facility schedules to determine the extent to which they can accommodate deviation in ESD's planned integrated test schedule. We also interviewed program and contractor officials on technical risks, potential impacts, and risk mitigation efforts underway and planned. We conducted this performance audit from August 2016 to October 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Comments from the National Aeronautics and Space Administration

Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Molly Traci (Assistant Director), LaTonya Miller, John S. Warren Jr., Tana Davis, Laura Greifner, Roxanna T. Sun, Samuel Woo, Marie P. Ahearn, and Lorraine Ettaro made key contributions to this report.
Why GAO Did This Study

NASA is undertaking a trio of closely related programs to continue human space exploration beyond low-Earth orbit. All three programs (SLS, Orion, and EGS) are working toward a launch readiness date of no earlier than October 2019 for the first test flight. Each program is a complex technical and programmatic endeavor. Because all three programs must work together for launch, NASA must integrate the hardware and software from the separate programs into a working system capable of meeting its goals for deep space exploration. The House Committee on Appropriations report accompanying H.R. 2578 included a provision for GAO to assess the progress of NASA's human space exploration programs. This report assesses (1) the benefits and challenges of NASA's approach for integrating these three programs and (2) the extent to which cross-program risks could affect launch readiness.

What GAO Found

The approach that the National Aeronautics and Space Administration (NASA) is using to integrate its three human spaceflight programs into one system ready for launch offers some benefits, but it also introduces oversight challenges. To manage and integrate the three programs—the Space Launch System (SLS) vehicle, the Orion crew capsule, and supporting ground systems (EGS)—NASA's Exploration Systems Development (ESD) organization is using a more streamlined approach than has been used with other programs, and officials GAO spoke with believe that this approach provides cost savings and greater efficiency. However, GAO found two key challenges to the approach:

The approach makes it difficult to assess progress against cost and schedule baselines. SLS and EGS are baselined only to the first test flight. In May 2014, GAO recommended that NASA baseline the programs' cost and schedule beyond the first test flight. NASA has not implemented these recommendations nor does it plan to; hence, it is contractually obligating billions of dollars for capabilities for the second flight and beyond without establishing the baselines necessary to measure program performance.

The approach has dual-hatted positions, in which individuals in two programmatic engineering and safety roles also perform oversight of those areas. This presents an environment of competing interests. These dual roles subject the technical authorities to cost and schedule pressures that potentially impair their independence. The Columbia Accident Investigation Board found in 2003 that this type of tenuous balance between programmatic and technical pressures was a contributing factor in that Space Shuttle accident.

NASA has lowered its overall cross-program risk posture over the past 2 years, but risk areas remain related to software development and verification and validation, which are critical to ensuring the integrated system works as expected. For example, delays and content deferral in Orion and SLS software development continue to affect ground systems software development and could delay launch readiness. GAO will continue to monitor these risks.

What GAO Recommends

Congress should consider directing NASA to establish baselines for SLS and EGS's missions beyond the first test flight. NASA's ESD organization should no longer dual-hat officials with programmatic and technical authority responsibilities.
NASA partially concurred with GAO's recommendation and plans to address it in the next year, but it did not address the need for the technical authority to be independent from programmatic responsibilities for cost and schedule. GAO continues to believe that this component of the recommendation is critical.
Background

DNN Selected Subprograms

Within DNN, the work of the four selected subprograms—Nuclear Material Removal, HEU Reactor Conversion, Radiological Security, and International Nuclear Security—focuses on efforts to remove and dispose of excess nuclear material from civilian sites worldwide, convert civilian research reactors to the use of non-weapons-useable nuclear fuel, secure radiological materials at their source in the United States and abroad, and improve the security of weapons-useable nuclear material in key countries. The selected subprograms organize their work in programmatic areas, which we refer to as components, and under each component the subprograms manage projects. Table 1 below describes the work of each subprogram and the components in which the subprogram organizes its work scope.

Program Management Leading Practices Related to Schedule and Cost Management

PMI's The Standard for Program Management and GAO's schedule and cost guides identify program management leading practices related to schedule and cost estimating and measuring performance against baselines, as follows:

PMI guidelines. According to PMI's guidelines, programs practice life-cycle management, which involves schedule and financial management throughout the course of the program's life-cycle phases—program definition, benefits delivery, and closure. In particular, PMI states that in conducting program schedule management, programs use a master schedule that integrates the schedules of program components necessary to achieve the program's goal. In program financial management, program cost estimates should be clearly defined and should consider the full life-cycle costs of the program. According to PMI, programs should also establish and measure performance against baselines for both schedule and cost.

GAO schedule and cost guides. GAO's schedule and cost guides, which draw from federal organizations and industry, define best practices for the processes needed to develop and manage high-quality and reliable schedule and cost estimates. Similar to PMI's guidelines, according to the GAO guides, programs should establish and use an integrated master schedule, establish cost estimates that cover the full life cycle of the program, document and define assumptions tailored to the program, incorporate analysis of program risk and uncertainty in schedule and cost estimates, and manage a program's schedule and cost by measuring against a baseline.

Selected DNN Subprograms Generally Do Not Use Selected Leading Practices to Manage Schedule and Cost

The four DNN subprograms we chose for review generally do not use selected leading program management practices to manage schedule and cost. Specifically, at the time of our review, none of the subprograms had schedule and cost estimates that encompassed their entire life cycles, although one subprogram planned to develop such estimates for its recently extended life cycle. In addition, none of the selected subprograms measure their overall schedule and cost performance against baseline estimates. NNSA officials said that the subprograms had not developed schedule and cost estimates that cover their life cycles and did not measure the subprograms against baselines due, in part, to uncertainty in planning scope and schedules that rely on the cooperation of other countries. DNN also does not require subprograms to have such estimates or to measure performance against schedule and cost baselines.
Following these practices, however, would provide NNSA managers and other stakeholders more complete information to evaluate how much the subprograms may cost to achieve their goals, the amount of time they may need to achieve these goals, and their actual versus planned performance. According to leading practices, programs should (1) establish a master schedule that integrates the schedules of program components necessary to achieve the program's goal, such as specified performance to be achieved over a defined life cycle, (2) determine costs that consider the full life-cycle costs of the program, and (3) measure performance against baselines for both schedule and cost. Figure 2 illustrates the extent to which the selected subprograms have established schedule and cost estimates compared to their planned life-cycle completion dates, if any.

Nuclear Material Removal

The Nuclear Material Removal subprogram had schedule and cost estimates that encompassed all three of its subprogram components through the subprogram's previously planned completion date of fiscal year 2022. However, the subprogram had yet to update its schedule and cost estimates through its new planned completion date of fiscal year 2027, which was established in May 2017. The subprogram also did not have readily available information on performance against its former schedule and cost estimates. Specifically:

Schedule. As of April 2017, the subprogram's schedule, which encompassed all three subprogram components, included 52 ongoing and planned projects, most with estimated completion dates by the end of fiscal year 2022, to reach a goal of removing or dispositioning a total of 8,466 kilograms of nuclear material. In May 2017, the subprogram extended its life cycle from fiscal year 2022 to fiscal year 2027 but at the time of our review had yet to update its schedule of planned projects to be completed during fiscal years 2023 through 2027. According to NNSA officials, they extended the subprogram's life cycle in part because certain projects planned to be completed by fiscal year 2022 were delayed and the subprogram's work was expanded.

Cost. The subprogram had a cost estimate for its planned work through fiscal year 2022 but at the time of our review had yet to update its cost estimate for the overall subprogram through its new planned completion date of fiscal year 2027. Specifically, as of June 2017, the subprogram had a cost estimate of about $595 million, according to our analysis of information provided by the subprogram. This estimate covered the planned work scope of all three subprogram components to be completed during fiscal years 2017 through 2022. The subprogram, however, did not have estimated costs for completing work scope planned during fiscal years 2023 through 2027. According to NNSA officials, as of June 2017, they were developing a cost estimate for the remaining years, although the officials did not specify when the cost estimate would be completed.

Measuring performance against baselines. The subprogram did not measure its overall performance against schedule and cost baselines. NNSA reported to Congress in July 2014 that the subprogram planned to remove or disposition approximately 3,000 kilograms of nuclear material by fiscal year 2022 at an estimated cost of about $600 million. However, the subprogram did not track information on its performance against the cost estimate. According to NNSA officials, removal projects have too many uncertain costs.
Instead, NNSA officials said that they update the subprogram's life-cycle cost each year as part of the annual planning for the next fiscal year's budget request. Until the subprogram develops schedule and cost estimates to support the recently revised life-cycle completion date of fiscal year 2027, it does not have the baselines it needs to measure its overall schedule and cost performance. Although the subprogram did not measure its overall performance against established schedule and cost baselines, according to monthly performance reports, the subprogram baselined and measured the schedule performance of individual removal projects by tracking the difference in number of days between forecasted project completion dates and baseline completion dates. However, the subprogram did not have information that integrated project performance information to provide an overall picture of schedule performance for the entire subprogram; a simple roll-up of the kind sketched later in this section would be one way to construct such a picture.

HEU Reactor Conversion

The HEU Reactor Conversion subprogram had schedule and cost estimates that covered the remaining work scope to complete two of its three subprogram components by fiscal year 2033 but not the work scope for a third component, estimated to be completed in fiscal year 2035. The subprogram also did not measure its overall performance against schedule and cost baselines. Specifically:

Schedule. The HEU Reactor Conversion subprogram did not have a schedule for the overall subprogram through completion of its life cycle. Instead, the subprogram had a schedule for all work scope planned for the 5-year FYNSP, which included the schedule for the remaining work to complete one of the three subprogram components—Molybdenum-99 (Mo99) efforts. Beyond the FYNSP planning period, the subprogram has an estimated completion date of fiscal year 2033 for a second component—U.S. reactor conversions—and has developed a schedule for completion of the component. For the third subprogram component—international reactor conversions—the subprogram estimates a fiscal year 2035 completion date for its remaining work scope to convert or verify the shutdown of 44 international reactors, but it had not developed a complete schedule to meet that date. Specifically, the subprogram's schedule was not up to date for 22 of the 44 international reactors in the subprogram's planned work scope to support the estimated fiscal year 2035 completion date for these reactors. Instead, in the subprogram's schedule, these reactors had estimated completion dates by fiscal year 2030. NNSA officials explained that the schedule was not up to date for these reactors because the reactors are in countries where the subprogram cannot currently plan or implement the conversions due to limitations in cooperation with these countries. For example, DNN cannot plan the schedule for conversion of reactors in Russia that are in the subprogram's scope until the United States and Russia resume joint nuclear security activities that the United States discontinued following Russia's invasion of Ukraine in 2014. NNSA officials said that the 2035 date is their best judgment of the earliest date when the subprogram could complete the conversions or verify certain reactors' shutdowns, based on the assumption that the United States and Russia may resume nuclear security cooperation in the 2020s. Because of the high degree of uncertainty with this date, the subprogram did not update the schedule to reflect the 2035 date, according to the officials.
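As discussed above for the Nuclear Material Removal subprogram, and as noted below for this subprogram as well, project-level schedule variances were tracked against baselines but never integrated into an overall view. The following minimal Python sketch shows one way per-project variances (forecast completion minus baseline completion) could be rolled up into a subprogram-level summary. The project names and dates are hypothetical; the sketch does not reflect NNSA's actual systems or data.

```python
# Minimal sketch of rolling up project-level schedule variance into an
# overall subprogram view. Project names and dates are hypothetical.
from datetime import date

projects = [
    # (project, baseline completion, forecast completion)
    ("Project A", date(2021, 3, 31), date(2021, 5, 15)),
    ("Project B", date(2021, 9, 30), date(2021, 9, 30)),
    ("Project C", date(2022, 6, 30), date(2022, 4, 30)),
]

# Positive variance means the forecast slips past the baseline.
variances = {name: (forecast - baseline).days
             for name, baseline, forecast in projects}

for name, days in variances.items():
    status = "late" if days > 0 else "on time or early"
    print(f"{name}: {days:+d} days ({status})")

# Simple subprogram-level roll-up: share of projects on or ahead of
# baseline, and the average variance across the portfolio.
on_track = sum(1 for d in variances.values() if d <= 0)
print(f"On or ahead of baseline: {on_track} of {len(projects)} projects")
print(f"Average variance: {sum(variances.values()) / len(projects):+.1f} days")
```

Even a roll-up this simple would give managers a single picture of whether a subprogram, not just its individual projects, is running ahead of or behind its baselines.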
Appendix II provides tables that list the planned reactor and facility projects in the HEU Reactor Conversion subprogram, their locations, and estimated conversion or shutdown completion dates.

Cost. The HEU Reactor Conversion subprogram did not have a life-cycle cost estimate for the overall subprogram but had overall life-cycle cost estimates for two of the three subprogram components. The subprogram had cost estimates that totaled approximately $1.1 billion through fiscal year 2033 and that included the remaining estimated life-cycle costs for the subprogram's U.S. reactor conversions component and its Mo99 efforts. For the third component—international reactor conversions—the subprogram only estimated costs for the 5-year FYNSP, not through the component's estimated completion date of fiscal year 2035. According to NNSA officials, developing a cost estimate that includes all remaining international reactor conversions through 2035 would be challenging because the costs for these projects are highly uncertain and vary depending on the willingness of each country to cooperate, as well as the unique technical, regulatory, and other factors that vary for each reactor in each country. The subprogram, however, had established estimated life-cycle budgets for completing the conversion or verifying the shutdown of each reactor in its work scope, which could be used, along with other information, to develop a cost estimate for the subprogram component.

Measuring performance against baselines. The subprogram did not measure overall subprogram performance against schedule and cost baselines. Specifically, as mentioned above, the subprogram did not have schedule and cost estimates for the overall subprogram that it could use to establish baselines to measure the performance of the overall subprogram. Although the subprogram had life-cycle estimates for its U.S. reactors and Mo99 components, the subprogram did not use these estimates as baselines to measure the performance of those components. The subprogram measured the schedule performance of individual projects under its three components against baselines by tracking the difference in number of days and months between forecasted project completion dates and baseline completion dates. However, it did not integrate and roll up the project information to provide an assessment of its overall schedule performance. In addition, the subprogram baselined and measured the cost performance of the U.S. High Performance Research Reactor project—which constitutes six of the seven reactors under its U.S. reactor conversions component—by tracking changes in the project's estimated life-cycle cost. However, the subprogram did not have similar information that tracked changes in the cost estimates of other projects under its three components.

Radiological Security

The Radiological Security subprogram did not have schedule and cost estimates for its three components through the subprogram's planned completion date in fiscal year 2033. The subprogram also did not measure overall subprogram performance against schedule and cost baselines. Specifically:

Schedule. The subprogram has an estimated completion date of fiscal year 2033 but did not have an overall schedule that covered its three components for meeting the 2033 date. Instead, the subprogram had a schedule that covered work to be completed under its three components during the 5-year FYNSP (fiscal years 2017 through 2021).
Specifically, for two of the three subprogram components—radiological source removal and nonradioisotopic technologies—the subprogram has not established specific work scope and schedules beyond fiscal year 2021 because of uncertainty about the future. For example, according to the subprogram's director, planning the adoption of nonradioisotopic technologies is uncertain because the timing of when such technologies can be adopted depends, in part, on regulations and international laws, making it challenging for the subprogram to define the scope of work. For the third subprogram component—radiological source protection—the subprogram has an estimated completion date of fiscal year 2033 to reach a total target of securing 4,394 buildings in its inventory of sites worldwide with high-priority radiological sources. However, the subprogram had not developed a schedule of specific projects to be completed beyond the 5-year FYNSP to meet that date and target. NNSA officials said that they are often uncertain when a project will be able to start because it depends greatly on circumstances in each country. Appendix III provides the Radiological Security subprogram's planned work scope for the radiological source protection component for fiscal years 2017 through 2033.

Cost. The Radiological Security subprogram did not have a life-cycle cost estimate for the overall subprogram through its estimated completion date of fiscal year 2033. Specifically, the subprogram had a cost estimate of about $849 million for all three components covering the 5-year FYNSP. However, for two of the three subprogram components—radiological source removal and nonradioisotopic technologies—the subprogram had not developed cost estimates beyond the 5-year FYNSP because, as mentioned above, it had not developed work scope for these components in the out-years. For example, according to the subprogram's director, the subprogram's radiological source removal component depends on the voluntary participation of users of radiological sources that register their sources with the subprogram. Therefore, the subprogram cannot estimate the number of sources to be removed in the out-years. For the third subprogram component—radiological source protection—the subprogram had assumed a stable budget to complete its target of securing 4,394 buildings by fiscal year 2033. However, according to the director of the subprogram, this budget assumption was not intended to be a reliable life-cycle cost estimate.

Measuring performance against baselines. As mentioned above, the subprogram did not have the schedule and cost estimates for the overall subprogram needed to establish baselines to measure its overall performance. The subprogram, however, baselined and measured the schedule performance of individual projects under its three components by tracking the difference in number of days between forecasted project completion dates and baseline completion dates. However, the subprogram did not integrate and roll up the project schedule performance information to provide performance information for the overall subprogram.

International Nuclear Security

The International Nuclear Security subprogram maintained schedule and cost estimates for the 5-year FYNSP (fiscal years 2017 through 2021) but did not have schedule and cost estimates for work scope in the years beyond the FYNSP. In addition, the subprogram did not measure overall performance against baselines. Specifically:

Schedule.
The International Nuclear Security subprogram had not established a life-cycle schedule for the overall subprogram or its two component efforts, as it had not identified specific work scope or end-point targets beyond fiscal year 2021 and considers its mission to be enduring (i.e., without an end date). Instead, the subprogram had only estimated a schedule for work scope in individual countries during the 5-year FYNSP. According to the subprogram director, the subprogram is expected to operate indefinitely and continue as long as nuclear materials exist, to improve security in countries possessing such materials. However, the subprogram had not planned project-specific work scope in years beyond the FYNSP because, according to the subprogram director, it is difficult to estimate the subprogram's likely level of foreign counterpart engagement in individual countries beyond 5 years.

Cost. Because it has not identified out-year work scope, the International Nuclear Security subprogram did not have an overall life-cycle cost estimate and only had an estimate of about $530 million for the work to be completed during the 5-year FYNSP period. According to NNSA officials, they have not developed a cost estimate for work scope in the years beyond the FYNSP because assumptions about future work will likely change due to the uncertainty in relationships with partner countries.

Measuring performance against baselines. The International Nuclear Security subprogram did not measure the performance of the subprogram against schedule and cost baselines. Specifically, as mentioned above, the subprogram did not have the schedule and cost estimates for the subprogram's life cycle beyond fiscal year 2021 needed to establish baselines to measure its overall performance. In addition, the subprogram did not use its 5-year FYNSP estimates as baselines to measure performance. Instead, the subprogram updates the FYNSP estimates each year in planning the next fiscal year's budget request. Moreover, unlike the other three subprograms, the International Nuclear Security subprogram did not have project schedule baseline information that could be integrated and rolled up to provide information on the performance of the overall subprogram.

In general, NNSA officials explained that uncertainty in planning the selected subprograms' work scope or schedules, particularly for components with projects that rely on the cooperation of foreign countries, was among the reasons they did not have schedule and cost estimates that covered the subprograms' life cycles or that went beyond the 5-year required planning period. In addition, according to these officials, DNN senior management does not require subprograms to establish schedule and cost estimates that cover the entire subprogram life cycle and to use these estimates as baselines to measure subprogram performance. However, uncertainty should not prevent these subprograms from establishing more complete or longer-term estimates to account for the time and resources they need to achieve their goals. As mentioned above, without such estimates, the subprograms do not have the baseline information they need to track their performance. According to leading practices, developing reliable schedule and cost estimates can be achieved by following steps that address data limitations and risks and uncertainties for a program.
For example, according to the GAO schedule guide, a reliable schedule should reflect all of a program's activities and recognize that uncertainties and unknown factors in schedule estimates can stem from, among other things, data limitations. In addition, according to the GAO cost guide, the cost-estimating process involves defining and documenting assumptions that are tailored to the specific program, such as assumptions about the program's life-cycle phases, political issues, or technology development. Assumptions should be based on historical data to minimize uncertainty and risk. These same assumptions should also be used to develop the program schedule. For management to make good decisions, the program estimate must reflect the degree of uncertainty so that a level of confidence can be given about the estimate. Accordingly, because assumptions defined for a particular program's schedule and cost estimate can vary, they should always be inputs to the program's risk analyses of cost and schedule. Programs use different methods to quantify uncertainty and risk in developing a schedule or cost estimate. DOE's cost estimating guide describes approaches for programs to incorporate risk and uncertainty in cost estimates, such as the use of lower- and upper-bound cost ranges that are developed based on risk analysis. Other NNSA programs use these approaches in developing schedule and cost estimates for highly uncertain, long-term program plans. In particular, NNSA's Office of Defense Programs develops and reports high- and low-range cost estimates for elements of NNSA's nuclear weapons modernization programs, in part to account for the uncertainty in these long-term program estimates. As mentioned above, such estimates would provide NNSA managers and other stakeholders information to help evaluate resources and compare the costs and benefits of different programs and priorities.

Because the selected subprograms do not measure their overall schedule and cost performance against baselines, NNSA managers, stakeholders, and Congress have incomplete information about these subprograms' actual-versus-planned schedule and cost performance over their duration and are, therefore, at risk of being unable to assess when a subprogram is likely to be completed or whether it will cost more or less than planned.

DNN's Program Management Policy Includes Some Leading Practices, but Does Not Address Life-Cycle Schedule and Cost Management

DNN's Revised Policy Includes Leading Practices on Risk and Quality Management

DNN's 2017 revised policy includes new sections that address leading practices on risk and quality management that all DNN programs and subprograms should follow. NNSA officials said they added these sections based on their review of leading practices in PMI's The Standard for Program Management and GAO's Standards for Internal Control in the Federal Government to ensure these leading practices were incorporated and required for DNN programs.

Risk management. According to leading practices on risk management, programs should have processes to manage risks, including processes to identify, assess, and respond to risks. In the revised DNN policy, under a new section on risk management, all DNN programs and subprograms are required to prepare risk management plans to help identify, analyze, handle, and monitor risk. For example, a DNN subprogram may identify the risk of schedule slippage due to political constraints in working with foreign countries and could incorporate and monitor that risk in planning; the sketch below illustrates how such uncertainty can be quantified in an estimate.
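To make concrete how risk analysis can produce the lower- and upper-bound ranges described above, the following minimal Python sketch runs a simple Monte Carlo simulation over per-element cost uncertainty. The work elements, triangular distributions, and dollar figures are invented for illustration, and the same approach applies to schedule durations as well as costs.

```python
# Minimal Monte Carlo sketch that turns per-element cost uncertainty into a
# low/high range for a program estimate. All figures are hypothetical.
import random

random.seed(0)  # reproducible illustration

# (low, most likely, high) costs in millions; a triangular distribution is a
# common, simple way to encode expert judgment about uncertain elements.
elements = {
    "Domestic work": (150, 200, 300),
    "International work": (250, 400, 800),  # wide range: cooperation is uncertain
    "Program support": (50, 60, 90),
}

def simulate_total():
    # random.triangular takes (low, high, mode).
    return sum(random.triangular(low, high, mode)
               for low, mode, high in elements.values())

totals = sorted(simulate_total() for _ in range(10_000))

def percentile(p):
    return totals[int(p / 100 * (len(totals) - 1))]

print(f"Low-bound estimate (20th percentile):  ${percentile(20):,.0f} million")
print(f"Median estimate:                       ${percentile(50):,.0f} million")
print(f"High-bound estimate (80th percentile): ${percentile(80):,.0f} million")
```

Reporting a range such as the 20th to 80th percentile, rather than a single number, is one way a subprogram with highly uncertain international work could still give decision makers a defensible long-term estimate while disclosing the uncertainty behind it.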
Quality management. According to program management leading practices on quality management, program quality should be continuously monitored. A new DNN policy section on continual improvement requires DNN programs and subprograms to plan and implement methods, such as program evaluations and management assessments, to monitor and improve processes. For example, a DNN subprogram may use an independent review by the NNSA Office of Management and Budget to help improve its program management processes, such as how it tracks cost, scope, and schedule. The revised policy also outlines steps for corrective actions to be taken when noncompliance is detected. These steps range from determining the cause of noncompliance to reviewing the effectiveness of corrective actions taken.

These new sections added requirements for DNN program management that were not previously documented. For example, in the prior policy, risk management was not a requirement for DNN programs and subprograms. In addition, NNSA officials said that they added the continual improvement section to the revised policy after reviewing PMI's practices on quality assurance, which they believed would clarify responsibilities regarding management assessments and independent reviews.

The Revised DNN Policy Does Not Include Leading Practices on Life-Cycle Schedule and Cost Management

The revised DNN policy does not address or require leading practices on life-cycle schedule and cost management for DNN programs or subprograms. Specifically, the revised policy does not outline requirements for programs or subprograms to establish life-cycle cost estimates or to measure performance against schedule or cost baselines. Instead, the revised policy provides requirements on schedule and cost management limited to the NNSA budgeting process covering the 5-year FYNSP. For example, according to the revised DNN policy, programs and subprograms must conduct program management activities, such as budget formulation, in alignment with anticipated resources in the FYNSP. Additionally, the policy requires programs and subprograms to establish performance measurement data and track cost or schedule performance, but only within the FYNSP.

According to leading practices, life-cycle management is important to program management and includes schedule and cost management activities that span the duration of the program. According to PMI, all programs, regardless of length, have life cycles; furthermore, leading practices indicate that activities related to managing the schedule, cost, and scope of a program should be conducted for the life of the program. For example, leading practices call for calculating cost estimates that consider the full program life cycle as close to the beginning of a work effort as possible, and then documenting this baseline to measure performance.

According to NNSA officials, the revised DNN policy does not include requirements to practice life-cycle management, including life-cycle schedule and cost management, because officials determined that life-cycle management did not apply to some DNN programs that NNSA officials believe are enduring or continuous. For example, as mentioned above, the director of the International Nuclear Security subprogram said that the subprogram will phase out of certain areas or reduce engagement with certain countries in the future but that it is expected to continue as long as nuclear materials exist and will work to improve security in countries possessing such materials.
We disagree that life-cycle program management does not apply to programs or subprograms that may have an enduring mission. Managers need to make informed decisions about whether a program is affordable within the agency's portfolio. NNSA and DNN should be able to compare DNN's various programs' requirements several years beyond the 5-year planning period. According to the GAO cost guide, in developing estimates, programs should define assumptions tailored to the program, such as assumptions about the program's life-cycle phases. For example, the International Nuclear Security subprogram could take steps to define end-point targets for when it may phase out work in certain areas or countries in the future. In addition, according to the GAO schedule guide, a comprehensive schedule should reflect all of a program's activities and recognize that uncertainties and unknown factors in schedule estimates can stem from, among other things, data limitations. Moreover, because assumptions themselves can vary, they should always be inputs to program risk analyses of cost and schedule.

According to NNSA officials, although the revised policy does not include requirements for life-cycle cost estimating, DNN programs could address this in their individual program management plans. NNSA officials stated that these program management plans for programs and subprograms should be detailed enough to also provide information on how the program will track progress, including by identifying changes to the planned schedule. However, the revised DNN policy does not clearly require DNN programs or subprograms to have program management plans, nor does it specify the elements of such plans. Specifically, the revised DNN policy requires each program to develop "program management documentation" that identifies program scope, schedule, and cost during the fiscal year and operating procedures for the fiscal year, but it does not outline similar requirements for the program's life cycle. In addition, the revised policy does not specify requirements or guidance, such as on cost estimation, for what programs or subprograms are to include in the program management documentation. In contrast, PMI indicates that programs should develop a program management plan that includes plans for program financial management, schedule management, and scope management for all phases of the program's life cycle. According to NNSA officials, the revised DNN policy is the only directive or documentation that spells out what is needed or required to be included in a program management plan.

Although the revised DNN policy does not clearly require DNN programs or subprograms to have program management plans, some DNN programs have developed or are developing such plans. For example, the Global Material Security program, which oversees the Radiological Security and International Nuclear Security subprograms, issued a new program management plan in April 2017. The Global Material Security program management plan requires that each subprogram maintain a 5-year budget for the FYNSP with cost estimates, but it does not require or provide guidance on developing life-cycle schedule or cost estimates. NNSA officials said that DNN underwent a major reorganization of its programs in January 2015, and some of the new program offices are still preparing their program management plans.
For example, the Material Management and Minimization program that oversees the Nuclear Material Removal and HEU Reactor Conversion subprograms is still developing its program management plan, according to NNSA officials. In addition, the four selected subprograms had various documented plans, but none fully addressed life-cycle schedule and cost management.

Nuclear Material Removal. The subprogram did not have a current program management plan that had been updated since the 2015 reorganization of DNN but instead relied on an older plan that covered a different scope than that of the current subprogram.

HEU Reactor Conversion. The subprogram did not have a program management plan for the overall subprogram. Instead, the subprogram had project execution plans for its U.S. reactor conversion projects and its Mo99 projects and relied on an outdated document for its international reactor conversion projects.

Radiological Security. The subprogram had a program management plan that included requirements for the use of project life-cycle baselines and for conducting cost estimation for the 5-year FYNSP. However, the plan had no requirement for developing a cost estimate for the life cycle of the subprogram and for using such an estimate to measure the performance of the overall subprogram.

International Nuclear Security. The subprogram had a program management plan that required cost estimating for 1 fiscal year. However, the plan did not include requirements for life-cycle estimates and for using initial or updated baselines to measure performance.

NNSA subprogram officials said that they do not have readily available life-cycle cost estimates and baseline measurement data in part because they are not asked to provide them. For example, NNSA officials from the HEU Reactor Conversion subprogram said that they did not have sufficient staff to track performance against initial baselines because it was not a priority for management, although it would be possible to do so if required. One of the stated goals of the revised DNN policy is to facilitate DNN-wide implementation of methods for programs and subprograms to monitor, measure, and improve management processes. However, because the policy does not require more complete information from DNN programs and subprograms on their cost, schedule, and performance against baselines—consistent with leading practices—it is not clear that this policy goal can be achieved.

Conclusions

When organizations apply leading program management practices—such as establishing schedules and cost estimates covering their planned life cycles and measuring performance against such baselines—they may be able to enhance their chances of achieving success across a range of programs. However, the four selected DNN subprograms are generally not applying these selected leading practices for life-cycle program schedule and cost management, due in part to the uncertainty and risks in working with international partners. Nevertheless, methods and approaches exist that allow programs to account for uncertainty and risk in developing schedule and cost estimates for their planned scope of work. Furthermore, while the revised DNN program management policy has incorporated some leading practices, it does not include requirements and guidance for DNN programs and subprograms to practice life-cycle schedule and cost estimating, and it does not require program management plans that could be the vehicle for DNN programs and subprograms to specify the use of such estimates.
Updating the DNN program management policy to include requirements for DNN programs and subprograms to follow leading practices for life-cycle program management would help NNSA ensure that managers, stakeholders, and Congress have better information on how much DNN programs and subprograms may cost to achieve their goals, the amount of time they may need to achieve these goals, and how efficiently and effectively they are actually being executed compared to plans.

Recommendation for Executive Action

The NNSA Deputy Administrator for DNN should revise the DNN program management policy to require DNN programs and subprograms to follow life-cycle program management. These requirements should include development of schedule and cost estimates that cover the life cycle of DNN programs and subprograms, use of methods to account for uncertainty and risk in such estimates, use of cost and schedule baselines to measure performance over program and subprogram life cycles, and development of program management plans. (Recommendation 1)

Agency Comments and Our Evaluation

We provided NNSA with a draft of this report for its review and comment. In written comments, which are summarized below and reproduced in appendix IV, NNSA neither agreed nor disagreed with our recommendation to revise the DNN program management policy to require DNN programs and subprograms to follow life-cycle program management. However, NNSA stated that it plans to take action in response to the recommendation. In general, NNSA stated that DNN will update its program management policy to formally document current practice and clarify expectations for addressing uncertainty. Specifically, NNSA said it will update the policy to: (1) reflect that life-cycle cost and schedule management should be applied at the project or subprogram level where appropriate, considering the extent of uncertainty impacting scope, potential timelines, and executability; (2) define the methodologies to (a) account for uncertainties where applying these techniques would result in a reasonable range of estimates that would be useful for planning and scheduling purposes or (b) document risk and track actions to reduce uncertainty where applicable; (3) address expectations for assessing cost and schedule performance, commensurate with the level of certainty present at baselining; and (4) address requirements for documenting program management plans.

Although we acknowledge NNSA's plan to update its policy, we have concerns regarding whether its proposed actions will ensure that DNN programs and subprograms effectively follow leading practices for life-cycle schedule and cost management in the future. First, we do not believe that updating the DNN program management policy to formally document current program management practice addresses our recommendation. NNSA's response suggests that its update to the policy is intended to reflect current DNN program management practices rather than signal a need for corrective action to address the DNN program management limitations we identified. Specifically, as we stated in our report, none of the four subprograms we reviewed had schedule and cost estimates that encompassed their entire life cycles, although one subprogram planned to develop such estimates for its recently extended life cycle.
In addition, NNSA’s proposed update to the DNN program management policy to reflect life-cycle schedule and cost management “where appropriate” is vague, and may give programs and subprograms too much discretion to avoid the requirement. To have an effective requirement on life-cycle program management and to be responsive to our recommendation, NNSA will need to clearly define the criteria for when a program should be exempt from a requirement to follow life-cycle program management. Finally, the meaning of NNSA’s proposed update to the policy to address expectations for assessing cost and schedule performance, commensurate with the level of certainty present at baselining is unclear. Specifically, it is unclear whether NNSA plans to require that DNN subprograms use cost and schedule baselines to measure performance, or whether it plans to exempt programs or subprograms from such practices based on unstated expectations. As we stated in our report, none of the subprograms we reviewed measured their overall schedule and cost performance against baseline estimates. To ensure that DNN subprograms take steps to measure schedule and cost performance against baselines and to be responsive to our recommendation, NNSA will need to define clear expectations for DNN programs and subprograms to follow. NNSA also provided general comments in its written comments regarding DNN program management. First, NNSA commented that DNN currently implements elements of life- cycle program management where appropriate and reasonable. However, according to NNSA, the majority of its international activities operate with an unusually high level of uncertainty regarding potential international cooperation and with limited information on international operations to understand the scope of work required to support useful planning and estimating. In NNSA’s view, the high uncertainty would result in range estimates so broad as to serve no useful purpose, and there is no appreciable cost-benefit to expending resources on such calculations. We recognize that organizations need flexibility to determine when it is appropriate and useful to apply leading practices on life-cycle program management. However, as noted in our report, managers need to make informed decisions about whether a program is affordable within the agency’s portfolio. Without more complete schedule and cost information on DNN subprograms, NNSA managers and other stakeholders have degraded information on the elements of DNN’s portfolio, which may limit their ability to assess and justify the affordability of long-term plans. If NNSA believes that some of DNN’s planned international work scope is too uncertain for subprograms to develop estimates of schedule and cost that cover their life cycles, then NNSA should evaluate whether it is appropriate to identify such work scope in DNN’s long-term plans at all. Second, NNSA commented that no specific requirement exists for DNN programs and subprograms to implement life-cycle cost estimates, and that DNN complies with current requirements. NNSA also commented that the proper application of leading practices recognizes that cost- benefits, as well as the potential usefulness and reliability of estimates, are important considerations. In instances in which uncertainty is extremely high, NNSA stated that focus shifts to disclosure of risks, and the establishment and tracking of actions to reduce the level of uncertainty. 
According to NNSA’s comments, DNN extensively discloses risks and tracks actions to reduce the level of uncertainty, and this was reflected in the most recent update to the DNN program management policy with the addition of a new section on risk management. NNSA also stated that as uncertainty is reduced, other principles can be applied where appropriate. We stated in our report that no specific requirement exists for DNN programs and subprograms to implement life-cycle cost estimates. Specifically, we noted that the DNN policy required that program management functions be conducted over the 5-year FYNSP. Therefore, we agree that the DNN subprograms we chose to review complied with current requirements. However, our review was not focused on compliance with requirements but rather on the use of leading or good program management practices. We also noted that NNSA’s stated objectives for the DNN policy include establishing a DNN-wide policy that incorporates leading practices for program management and that facilitates the implementation of methods for programs and subprograms to monitor, measure, analyze, and improve management processes. Leading practices on life-cycle program management are important for an organization to successfully plan the resources it needs to achieve its goals and assess its performance in doing so. DNN’s revised policy did not acknowledge management of the program life cycle as an essential program management function and did not include any requirements on leading practices on life-cycle schedule and cost management. We agree that risk management processes should be used to monitor risks and track actions to reduce uncertainty. As we stated in our report, the revised DNN policy included a new section on risk management under which all DNN programs and subprograms will be required to prepare risk management plans to help identify, analyze, handle, and monitor risk. However, the new section did not include criteria for DNN subprograms to follow when uncertainty related to risks being monitored is low enough to allow a subprogram to develop life-cycle schedule and cost estimates. We are sending copies of this report to the appropriate congressional committees, the NNSA Administrator, the NNSA Deputy Administrator for Defense Nuclear Nonproliferation, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or oakleys@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. Appendix I: Objectives, Scope, and Methodology This report examines the extent to which (1) selected subprograms within the National Nuclear Security Administration’s (NNSA) Office of Defense Nuclear Nonproliferation (DNN) use program management leading practices to manage schedule and cost, and (2) DNN has incorporated program management leading practices in its revised program management policy. To conduct this work, we reviewed 4 selected DNN subprograms. DNN has 4 major programs that manage a total of 13 subprograms (a subprogram is a program managed as part of another program). Specifically, we selected the Nuclear Material Removal and Highly Enriched Uranium (HEU) Reactor Conversion subprograms, which DNN manages under its Material Management and Minimization program.
In addition, we selected the Radiological Security and International Nuclear Security subprograms, which DNN manages under its Global Material Security program. We selected these subprograms for review because they had defined start dates, end dates, and/or work scope indicating that they had project-like aspects. These subprograms organize their work in programmatic areas, which we refer to as components, and under each component the subprograms manage various types of projects, such as projects to remove nuclear material from civilian sites worldwide. We also selected the 4 subprograms because they were not the subject of other ongoing or recently completed GAO reviews. The information we obtained from these subprograms is not generalizable, but we believe that we obtained important insights into DNN’s cost and schedule management of these subprograms. To examine the extent to which the selected DNN subprograms use program management leading practices to manage cost and schedule, we identified selected leading practices by the Project Management Institute (PMI) in The Standard for Program Management and by GAO in its schedule and cost guides. The selected leading practices we identified were the use of a master schedule necessary to achieve a program’s goals, cost estimates that cover the full life cycle of a program, and schedule and cost baselines to measure performance. We collected and reviewed subprogram planning documents, monthly performance reports, and spreadsheet data on work scope, historical costs, schedules and cost estimates established by the subprograms, and their use of project baselines to measure performance. We also reviewed information the subprograms reported in NNSA’s fiscal year 2017 and 2018 congressional budget justifications. To understand its capabilities, we also interviewed the NNSA officials and contractors who manage the program management information system that 3 of the 4 subprograms use to manage schedule and cost information. We interviewed NNSA officials who manage the selected DNN subprograms about the use of these practices and their views on challenges or limitations in using them. We also interviewed representatives at Argonne National Laboratory and Pacific Northwest National Laboratory, which operate projects for the subprograms, to identify how projects develop schedule and cost estimates and pass information on to the subprograms. To assess the reliability of the schedule and cost estimates on the selected subprograms, we interviewed NNSA officials and national laboratory contractors who were knowledgeable about the process followed to develop and update the estimates and the program management information systems used to manage the schedule and cost information and generate reports. We determined that the data were sufficiently reliable for our purposes, which were to report the subprograms’ estimated schedule completion dates and cost estimates, as well as report the fiscal years and subprogram components and projects covered by the subprogram schedule and cost estimates. To examine the extent to which DNN has incorporated leading practices into its revised program management policy, we reviewed DNN’s revised program management policy approved in February 2017. We compared the revised policy to the 2005 version to identify the changes included in the revised policy. We reviewed program management leading practices by PMI in The Standard for Program Management and by GAO in its schedule and cost guides and federal internal control standards.
For example, we considered the applicable leading practices on schedule and cost management identified above as well as other practices such as those on risk management, quality management, and development of program management plans. We compared these practices to DNN’s requirements and guidance contained in the revised DNN policy. We interviewed NNSA officials about the development of the new policy and their views on the reasons specific leading practices were included in the revised policy and others were not, as well as challenges DNN’s programs and subprograms face in managing program schedule and cost. We also reviewed program management plans for the 4 selected subprograms and the major programs under which these subprograms operate. We then interviewed NNSA officials from the selected subprograms to determine their involvement in developing the revised DNN program management policy and the status of individual program management plans that were under development at the time of our review. We conducted this performance audit from June 2016 to September 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Scope and Completion Dates for the Highly Enriched Uranium Reactor Conversion Subprogram The Office of Defense Nuclear Nonproliferation’s Highly Enriched Uranium (HEU) Reactor Conversion subprogram consists of three components: (1) U.S. research reactor conversions, (2) international research reactor conversions, and (3) Molybdenum 99 (Mo99) efforts, which include international Mo99 isotope production reactor conversions and projects to establish new U.S. non-HEU Mo99 production facilities. The subprogram’s current goal is to convert or verify shutdown of 156 HEU reactors and isotope production facilities and to support the establishment of a domestic, non-HEU-based Mo99 production capability. Tables 2 through 4 below list the U.S. reactor conversions, international reactor conversions or shutdowns, and Mo99 projects in the HEU Reactor Conversion subprogram’s planned scope of work, for each of the subprogram’s three components, as of July 2017. Appendix III: Scope and Completion Dates for the Radiological Security Subprogram’s Source Protection Component The Office of Defense Nuclear Nonproliferation’s Radiological Security subprogram’s current goal for the radiological source protection component is to upgrade security in 4,394 buildings worldwide by fiscal year 2033. Table 5 shows the estimated number of buildings to be completed each year as of June 2017. Appendix IV: Comments from the National Nuclear Security Administration Appendix V: GAO Contact and Staff Acknowledgments GAO Contact Shelby S. Oakley, (202) 512-3841 or oakleys@gao.gov. Staff Acknowledgments In addition to the individual named above, William E. Hoehn (Assistant Director), Natalie M. Block, R. Scott Fletcher, Brian M. Friedman, Cindy Gilbert, Jason T. Lee, TyAnn Lee, Duc Ngo, Jeanette Soares, Sheryl Stein, and Sara Sullivan made key contributions to this report.
Why GAO Did This Study The threat posed by the proliferation of nuclear and radiological weapons remains a pressing national security challenge. DNN implements nuclear nonproliferation programs worldwide. To carry out its mission, for fiscal year 2018 DNN requested an appropriation of about $1.5 billion for its 4 major programs and their 13 subprograms. A House Armed Services Committee report, accompanying a bill for the National Defense Authorization Act for Fiscal Year 2017, included a provision for GAO to review and assess DNN's project and program management processes and systems. GAO's report examines the extent to which (1) selected DNN subprograms use program management leading practices to manage schedule and cost and (2) DNN has incorporated leading practices in its revised program management policy. GAO selected 4 DNN subprograms to review that had defined end dates and/or work scope and that GAO had not recently examined. GAO reviewed documentation on DNN and NNSA's program management policies and practices; reviewed selected leading practices published by PMI and GAO; and interviewed agency officials. What GAO Found The 4 subprograms GAO reviewed from the National Nuclear Security Administration's (NNSA) Office of Defense Nuclear Nonproliferation (DNN) generally do not use selected program management leading practices to manage schedule and cost. According to generally recognized leading practices from the Project Management Institute (PMI) and GAO, programs should (1) establish schedules necessary to achieve the program's goal, (2) establish life-cycle cost estimates, and (3) measure performance against schedule and cost baselines. However, none of the DNN subprograms have schedule and cost estimates covering their planned life cycles and none measure performance against schedule and cost baselines. The following figure illustrates the extent to which the selected subprograms have established schedule and cost estimates compared to their planned life cycles. NNSA officials said that the subprograms do not have schedules and cost estimates that cover their life cycles and do not measure performance against baselines, in part, because DNN management does not require such estimates or baseline measurements. The lack of a requirement is consistent with the limitations in DNN's revised program management policy, which does not address leading practices on establishing schedule estimates, estimating life-cycle costs, and measuring against such baselines. According to leading practices, in developing schedule and cost estimates a program should define assumptions tailored to the program, such as its life-cycle phases. Updating the DNN policy to include requirements and guidance on cost estimating and tracking performance against schedule and cost baselines could help ensure that NNSA managers and Congress have better information on how much DNN programs and subprograms may cost, the time they may need to achieve their goals, and how effectively they are being executed compared to plans. What GAO Recommends GAO recommends that DNN revise its program management policy to require DNN programs and subprograms to follow life-cycle program management, such as requiring life-cycle estimates and measuring against baselines. NNSA neither agreed nor disagreed with the recommendation but plans to take action to revise its policy.
Background Check-off programs are designed to expand the market for a given agricultural commodity, such as eggs, pork, or highbush blueberries, through generic promotion, research, and consumer and industry information. A check-off program is meant to expand the demand for a commodity rather than for any particular brand or producer. Although state, regional, and local check-off programs, some of which have existed for over 70 years, may be voluntary, federal programs are mandatory. Many commodity groups prefer mandatory programs to address the free rider problem—that is, producers, handlers, processors, importers, or others in the marketing chain who do not pay into a check-off program but benefit economically from voluntary programs that others have funded. After Congress passed the Cotton Research and Promotion Act of 1966, the first federally mandated agricultural check-off program and board—for cotton—was created. Over the next three decades, Congress authorized the creation of an additional 11 commodity programs and their respective boards. The 12 programs and boards created under the authority of individual stand-alone legislation adhere to the specific requirements as set forth in their respective authorizing legislation. The passage of the Commodity Promotion, Research and Information Act of 1996 (generic legislation) gave USDA the authority to establish additional commodity check-off programs and boards. Since then, 10 additional boards were created based on this generic legislation. Those boards are subject to the requirements set forth in the generic legislation. (See table 1 for the year established and authorizing legislation for all 22 check-off programs.) To create a check-off program, industry groups first identify the need for such a program and then negotiate among themselves to agree on a basic program framework. The framework includes the rate of assessment and the various program activities to be undertaken, such as promotion, advertising, research, and providing information to consumers and industry. Additionally, each industry proposes regulations to USDA for the structure of the board that will carry out these activities. Because each industry has unique characteristics, a different board structure is appropriate for each check-off program. The boards vary in size, geographic representation, and types of individuals who are board members—that is, producers, processors, handlers, importers, public representatives, or others in the marketing chain. USDA, in consultation with the industry, then develops regulations to define how the program will be operated, how the funds will be collected, and how compliance with the authorizing legislation will be maintained, among other things. The check-off programs must be approved by a majority of producers—and in some cases processors, importers, and handlers—subject to the assessments. To gain approval, a referendum must be held either before check-off program operations begin or within some specified time after assessments are first collected, depending on the authorizing legislation. To fund a check-off program, producers, handlers, processors, importers, or others in the marketing chain are assessed for each unit of the commodity sold, produced, or imported. For example, for each 30-dozen case of eggs sold, a producer is assessed $0.10. These funds go to the American Egg Board.
The boards are to use assessments for the research, promotion, and consumer and industry information activities as well as for reimbursing AMS for its oversight costs. In 2016, total assessments collected for the 22 check-off programs ranged from $0.6 million for the popcorn check-off program to $332.1 million for the dairy check-off program (see table 2). AMS’s Oversight and Past Recommendations To facilitate oversight, AMS breaks the 22 check-off programs into four of the agency’s commodity areas: (1) Cotton and Tobacco—the cotton check-off program; (2) Dairy—the dairy and fluid milk check-off programs; (3) Livestock, Poultry, and Seed—the beef, egg, lamb, pork, sorghum, and soybean check-off programs; and (4) Specialty Crops—the Christmas tree, Hass avocado, highbush blueberry, honey, mango, mushroom, paper and packaging, peanut, popcorn, potato, processed raspberries, softwood lumber, and watermelon programs. AMS has a functional committee for the check-off programs, which comprises a chair and the deputy administrators from the four AMS commodity areas and meets quarterly. The functional committee reports to the AMS Associate Administrator and was established to increase coordination and promote best practices and consistency across the 22 check-off programs. Additionally, the four commodity area directors and other senior agency officials meet weekly to discuss any issues that have arisen and to discuss any necessary policy changes. AMS marketing specialists are responsible for the day-to-day oversight of the check-off boards and for ensuring that board decisions and operations are carried out in accordance with applicable legislation and regulations. Each check-off program has a designated AMS marketing specialist serving as the primary overseer of all check-off program activities. (Fig. 1 shows AMS’s oversight structure for the check-off programs.) As part of their oversight duties, marketing specialists review and approve board budgets, contracts, promotional activities, board policies, and bylaws, among other activities. Every 3 years, marketing specialists are also to conduct management reviews that assess each of the 22 check-off boards’ internal controls to determine whether there is reasonable assurance that the boards are in compliance with statutes, regulations, and the board’s and AMS’s policies and procedures. AMS management reviews are to include reviews of check registers, contract and subcontract samples, assessments collected, and travel reimbursements, among other items. AMS’s Guidelines for AMS Oversight of Commodity Research and Promotion Programs, most recently updated in September 2015, is designed to facilitate the application of legislative and regulatory provisions of the check-off programs and promote consistency in AMS’s oversight of the 22 check-off programs. These guidelines, which pertain to AMS as well as board members and board staff, are not intended to cover the daily responsibilities of board operations or AMS’s oversight. Instead, the guidelines provide broad information on AMS’s expectations for how boards should operate and how AMS will oversee the programs in activities such as budget approval, contracts, financial accountability, referendum, and investments, among other items. In March 2012, USDA OIG released a report on AMS’s oversight of check-off programs.
The work was initiated by the OIG after a 2010 investigative report, conducted at the request of the AMS Administrator, identified the possibility of weak oversight controls over the check-off boards. The 2012 report included two recommendations for AMS to develop and implement (1) standard operating procedures that provide detailed instructions for performing oversight activities to address all areas listed in the agency’s guidelines and (2) guidance for conducting periodic internal reviews of program area operations to ensure the enforcement of AMS’s guidelines. AMS agreed with the two recommendations and planned to implement them with a variety of actions, as discussed below. AMS Has Improved Its Oversight of Check-off Programs, but Some Oversight Activities Are Not Consistent across Programs AMS has responded to recommendations for improving oversight made in the OIG’s 2012 report, particularly by developing and implementing standard operating procedures and conducting internal reviews of AMS check-off program oversight. However, AMS does not provide consistent oversight across check-off programs in some areas; specifically, it does not routinely review check-off program subcontracts during its management reviews, conduct follow-up on management review recommendations, ensure that financial assurances are included in annual audits, or ensure that check-off boards share information with assessment payers on program websites. In conducting their oversight of the check-off programs, senior agency officials and marketing specialists said they face challenges because of increased use of social media, the absence of an information system for tracking approvals, and complex Freedom of Information Act (FOIA) requests for some programs, which may delay the completion of some oversight priorities. AMS Has Made Improvements in Response to Recommendations Made by USDA’s OIG The OIG’s 2012 report included two recommendations that AMS has since implemented: to develop and implement (1) standard operating procedures and (2) guidance for conducting periodic internal reviews of its oversight activities. In August 2013, AMS developed and implemented its standard operating procedures, which provide marketing specialists with more detailed guidance on the various oversight activities that are outlined in the agency’s program guidelines. The standard operating procedures cover a range of oversight activities, including budget review, contract review, advertising and promotional materials review, and financial and internal control oversight. Included in the more detailed guidance are various checklists that marketing specialists can use to itemize the requirements that boards must meet in a variety of areas. For example, the budget review checklist includes a list designed to ensure that budgets conform to law and contain, among other items, accurate sums and categories, as well as clearly listed administrative expenses. According to senior agency officials and marketing specialists, the standard operating procedures have assisted AMS in providing consistency across the 22 commodity check-off programs, have helped ensure that oversight responsibilities are carried out, and have provided documentation of specific duties for new marketing specialists. In response to the OIG’s second recommendation, AMS has developed and implemented guidance for conducting internal reviews of its oversight of check-off programs. 
Internal reviews are conducted by AMS’s Management and Analysis Program group to evaluate whether the AMS commodity areas that oversee check-off programs employ controls that provide reasonable assurance that the check-off programs are meeting legislative and regulatory requirements. According to an AMS directive, internal reviews of each of the four AMS commodity areas are to be conducted on a rotating basis. An AMS internal review of the Cotton commodity area was completed in November 2014, an internal review of the Specialty Crops commodity area was completed in September 2015, and an internal review of the Dairy commodity area was completed in May 2017. According to officials in the Management and Analysis Program, the Livestock, Poultry, and Seed commodity area internal review began in May 2017. The Cotton internal review found the program to provide reasonable assurance that the boards were complying with legislative requirements and that the oversight controls were adequate and functioning as intended. The Specialty Crops internal review found that the commodity area was fulfilling its oversight responsibilities but also found opportunities to strengthen control practices, including ensuring consistent and timely application in its use of checklists and tracking management reviews to ensure that they are completed and issued in a timely manner. As a result, the Specialty Crops commodity area implemented changes to its use of checklists and agreed to complete management reviews in a timely manner. The Dairy internal review found opportunities to strengthen oversight, primarily with regard to management reviews and recordkeeping. As a result, according to senior agency officials, the Dairy commodity area has implemented changes to its management review and recordkeeping processes. AMS Does Not Provide Consistent Oversight across Check-off Programs in Some Areas We identified four areas in which AMS does not provide consistency in its oversight across its check-off programs: (1) review of subcontracts, (2) follow-up on recommendations made to check-off boards, (3) ensuring that independent financial audits contain statements of assurance, and (4) ensuring that information is available on program websites for assessment payers (i.e., transparency). Subcontracts. The 2012 OIG report found that AMS did not recognize in its guidelines for check-off programs that its oversight role extended to monitoring subcontracts. Following the release of the OIG report, AMS updated the guidelines to respond to the OIG finding. Under the 2015 AMS guidelines, marketing specialists are to review a sample of subcontractor expenses during their management reviews. However, we found that AMS did not similarly update its standard operating procedures for the check-off programs and that these reviews are not being done consistently across programs. We found that the marketing specialist for one of the eight programs we reviewed chose a sample of subcontracts for the management review and documented this selection in the management review report. Marketing specialists for three of the programs said they reviewed subcontracts only if the sample of primary contracts that were part of the management review included subcontracts. Marketing specialists for the other four programs said they did not review subcontracts. Two marketing specialists we interviewed said they do not select a sample of subcontracts because check-off boards are responsible for overseeing and monitoring subcontracts.
Senior agency officials and marketing specialists also noted that they review and approve all promotional materials regardless of whether any material is from a contract or subcontract. Senior agency officials also said that the contracting process differs among the various check-off boards and may cause confusion about what is considered a subcontract for purposes of a management review. For example, the cotton board contracts with Cotton Inc. to carry out the program’s research and promotion activities; Cotton Inc. may, in turn, contract with entities to carry out those research and promotion activities—considered a cotton board subcontract. This is in contrast to processes of other boards, such as the honey board, which can directly contract with entities to carry out research and promotion activities; those contractors may, in turn, subcontract duties. In addition, the potential exists for subcontract costs to total hundreds of thousands of dollars. A 2010 OIG investigative review found that a subcontractor of one check-off board used subcontracts to pay employees unauthorized bonuses of about $302,000. Without revising its standard operating procedures for check-off programs to recognize that each management review is to include a sample of subcontracts for review, AMS’s ability to prevent misuse of subcontract funds is impaired. Recommendation follow-up. Under AMS’s guidelines and standard operating procedures, marketing specialists are to ensure that corrective actions are taken by the boards in a timely manner if a matter is recommended in the management review, conducted every 3 years. For example, the standard operating procedures state that the board has 30 calendar days from the receipt of the management review report to respond to the findings by formal letter and that follow-up should include appropriate documentation of the corrective actions taken. The 2012 OIG report found that there was little consistency among AMS commodity areas regarding the reporting of management review results and follow-up procedures. Four of the check-off programs we reviewed obtained written confirmation from boards about how they intended to address issues identified during management reviews consistent with the standard operating procedures; three others did not obtain written confirmation, but said they obtain any check-off board plans for remediation via less formal means, such as via e-mails or during board meetings. The management review for the eighth program did not contain any recommendations. According to marketing specialists we interviewed, the follow-up process to ensure that boards have taken corrective actions is also informal—a specialist learns how management review recommendations have been implemented by attending board and committee meetings. Senior agency officials verified that AMS has no mechanism for tracking follow-up with check-off boards to ensure that they have taken corrective actions. Under federal internal control standards, management should remediate identified internal control deficiencies on a timely basis and, with oversight from the oversight body, monitor the status of remediation efforts so that they are completed on a timely basis. Without establishing a mechanism for documenting and tracking follow-up with check-off boards on the implementation of management review recommendations, AMS has no assurance that it is consistently monitoring the status of corrective actions.
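A tracking mechanism of this kind need not be elaborate. The following sketch is purely hypothetical—it is not an AMS system, and the field and program names are illustrative assumptions—but it shows, in Python, the handful of data elements such a log might capture for each management review recommendation so that open corrective actions can be listed at any time.

from dataclasses import dataclass
from datetime import date
from typing import List, Optional

@dataclass
class RecommendationRecord:
    """One management review recommendation and its follow-up status."""
    program: str                       # commodity program, e.g., "Honey" (hypothetical)
    review_date: date                  # date of the triennial management review
    recommendation: str                # text of the recommendation
    response_due: date                 # board response due 30 calendar days after report receipt
    response_received: Optional[date] = None
    corrective_action: str = ""        # board's documented corrective action
    closed: bool = False               # set once AMS verifies implementation

def open_items(log: List[RecommendationRecord]) -> List[RecommendationRecord]:
    """Return recommendations still awaiting verified corrective action."""
    return [r for r in log if not r.closed]

# Example: one open recommendation awaiting a board response.
log = [
    RecommendationRecord(
        program="Honey",
        review_date=date(2017, 3, 1),
        recommendation="Document board approval of subcontractor expenses.",
        response_due=date(2017, 3, 31),
    )
]
for item in open_items(log):
    print(item.program, "-", item.recommendation, "- response due", item.response_due)

The essential point is the structure rather than the technology: a shared record with due dates and a closed flag makes the status of every corrective action visible at any time.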
Senior agency officials said that having a formal method to track and follow up on management review recommendations would allow them to identify trends, best practices, and similar emerging issues among the check-off programs. Independent financial audits. Each year, each check-off board is required by law to hire an independent audit firm to conduct an audit of the board’s financial statements in accordance with generally accepted government auditing standards. This audit helps to ensure compliance with legislative, regulatory, and policy directives. AMS guidelines direct marketing specialists to review the annual financial audits to determine whether the auditor identified any misuse of board funds and if the audit adequately addressed whether (1) funds were discovered to be used for influencing government policy or action, (2) the board adhered to the AMS investment policy, (3) internal controls over funds met auditing standards, (4) funds were used only for projects and other expenses authorized in a budget approved by USDA, and (5) funds were used in accordance with AMS guidelines. The standard operating procedures state that AMS is to ensure that audits contain these five statements of assurance, and they state that the audit firm is to express an opinion on the financial statements of the board and include a report on internal controls and compliance with applicable laws and regulations. The 2012 OIG report found that none of the independent audit reports included the five statements of assurance for the 18 check-off boards reviewed. In our sample, audit reports for four of the eight programs included the five statements of assurance. For two of the programs in our sample, the engagement letters, which document the agreed-upon terms of the audit, contained all five assurances, but the audit reports did not contain the five assurances. For the remaining two programs in our sample, neither the engagement letters nor the audit reports contained all five assurances, but senior agency officials said that the AMS marketing specialists for those two programs ensured that these assurances were adequately addressed during pre- and post-audit meetings. According to marketing specialists we interviewed for those two programs, audits following government auditing standards incorporate the requirements and are fulfilled by a general statement that boards were in compliance with laws and regulations. However, the 2012 OIG report found that an independent auditor did not include the specific assurances because the auditor was not asked to perform such work, and it noted that only minimal adjustments would be needed to provide for those assurances. Without ensuring that its annual independent financial audits include the five statements of assurance as outlined in the standard operating procedures, AMS will have less certainty that check-off funds are not subject to waste, fraud, or mismanagement. Transparency. According to the Business Roundtable and the Organisation for Economic Co-operation and Development’s principles of corporate governance, a strong disclosure regime that promotes transparency is central to stakeholders being able to access regular, reliable, and comparable information. As check-off programs use assessment money collected from stakeholders of the commodity being promoted, AMS’s guidelines state that both transparency and oversight of the check-off funds are critical.
Moreover, AMS’s guidelines state that annual budget summaries should be posted on the check-off board’s website and that three additional documents are either to be on the website or otherwise made available: (1) the bylaws and policy statements, (2) annual reports, and (3) the independent economic evaluation of effectiveness. Four of the eight check-off programs in our sample posted all four documents on the programs’ websites. All eight check-off programs posted their annual reports online. Four of the check-off programs, however, did not post to their websites at least one of the remaining documents—the budget summary, bylaws, or independent economic evaluation. Marketing specialists we interviewed said that boards would supply information not included on the websites if an assessment payer requested such information, which is consistent with AMS guidelines. Board executives we interviewed from those programs that do not post all four documents on their websites also said that they would supply the information to assessment payers if contacted. Senior AMS officials also said that there are stakeholders who may not have computers or access to the Internet and may therefore request information via postal mail. Industry organization representatives we interviewed said that transparency of how funds are used and the effectiveness of the programs are important to their members. One industry organization representative we interviewed said that, although some stakeholders may not use the Internet, posting information on how assessments are being used, such as the information provided in annual reports, is useful for stakeholders and builds trust among check-off boards and stakeholders. Posting information on the boards’ websites could convey information to stakeholders who have access to the Internet at a low cost. Without including in the guidelines and standard operating procedures that all four key check-off board documents (i.e., annual budget summaries, bylaws and policy statements, annual reports, and independent evaluations of economic effectiveness) should be posted on a check-off program’s website, AMS may be missing an opportunity to ensure that some assessment payers have access to information on program operations and effectiveness. AMS Officials Identified Challenges in Their Efforts to Oversee Check-off Programs AMS officials identified ongoing challenges in check-off program oversight. In particular, AMS marketing specialists and senior agency officials identified three challenges: (1) the increase in some check-off boards’ use of social media, (2) the absence of an information system to track approvals, and (3) complex and time-consuming FOIA requests for some programs. Because of competing priorities, some oversight duties may be delayed. Increase in boards’ social media efforts. According to marketing specialists, four of the eight check-off programs have seen a significant increase in the boards’ use of social media, which has been a challenge in terms of both workload and the need for additional AMS guidance because the specialists must approve the social media content. Marketing specialists for the other four programs said that the check-off programs they oversee have not yet increased their social media presence enough to make it a challenge for workload.
Senior agency officials and marketing specialists agreed that oversight of the check-off programs requires a significant amount of time and effort that has been made more complicated since some check-off programs began using social media. For example, a marketing specialist for one check-off program approved over 3,000 items—including social media, promotional, and research materials—in a 6-month period. According to this marketing specialist, depending on the complexity of the item needing approval, there could have been dozens of communications between the specialist and the check-off board staff. In addition, marketing specialists and senior agency officials said that because social media is constantly evolving, AMS has needed to reevaluate its guidance to boards for social media. The senior agency officials acknowledged that the duties of marketing specialists are demanding and that they are working to find ways to provide support to marketing specialists. Senior agency officials said that this is challenging because the boards must reimburse AMS for oversight costs, so any additional personnel would be paid for through check-off assessments. Also, AMS established a social media committee made up of marketing specialists who have drafted social media guidance for the boards to follow. Technology. Tracking the numerous promotional and research approvals can be a challenge for some AMS marketing specialists because of the absence of an information system to track approvals. According to two marketing specialists, during busy times, they may be handling more than 20 requests for approvals a day. While marketing specialists for two of the check-off programs we reviewed said that the use of approval tracking software, paid for by the respective check-off boards, has made their oversight function more efficient, other marketing specialists said that they must rely on e-mail messages to organize the status of approvals. Marketing specialists who have tracking software said that they can quickly see the status of any approval at any given time; further, check-off board staff can also use the software to prioritize approvals. One marketing specialist said that, although she had developed a system for organizing e-mails, a tracking system used by both AMS and the board would ensure that oversight activities would not be delayed and could expedite the approval process. Senior agency officials said that it would be helpful if each marketing specialist had this software but that the check-off boards would need to pay for this expense. FOIA requests. Responding to complex FOIA requests about check-off programs has been a challenge, according to senior agency officials, marketing specialists, and board executives of four of the eight programs we reviewed. Some requests do not take many resources to fulfill, but others take significant time and money. For example, to respond to a FOIA request, board staff and marketing specialists must identify pertinent documents; review them to ensure that there is no proprietary or sensitive information; and, as needed, involve the board’s legal counsel or third-party businesses. According to senior agency officials, in one case, a FOIA request resulted in the check-off board and AMS providing approximately 10,000 documents to the requester. AMS estimates that in fiscal year 2016, for the Livestock, Poultry, and Seed commodity area programs, it cost the agency about $182,000 and more than 2,700 hours to fulfill FOIA requests.
For the same period for the Dairy commodity area programs, AMS estimates that it cost over $365,000 and about 6,600 hours to fulfill FOIA requests. Because AMS is reimbursed for its oversight costs, the funds to cover FOIA-related costs come directly from check-off assessments. These cost estimates do not include check-off board staff resources utilized to fulfill FOIA requests. Senior agency officials said that there are legal constraints on the types of individuals and organizations that can be asked to cover fees associated with document retrieval under FOIA. Check-off Evaluations Generally Indicate Positive Returns but Vary in How They Are Conducted and Reviewed Independent economic evaluations of the effectiveness of check-off programs, conducted at least every 5 years, have generally shown a positive benefit to those who pay assessments. The evaluations we reviewed varied both in the methods used to conduct the analysis and in how information was reported, and they revealed certain methodological limitations. According to senior agency officials as well as the economists who conducted the evaluations, the variations are in part due to the differences in check-off board resources. We found that AMS does not consistently document its review of independent economic evaluations and has no criteria established for determining what makes for a credible methodology and results. Evaluations of Check-off Programs Were Conducted Every 5 Years and Show a Range of Positive Benefits for Assessment Payers The Federal Agriculture Improvement and Reform Act of 1996 requires check-off boards to (1) fund independent economic evaluations of the effectiveness of their promotion activities every 5 years, (2) submit the evaluation to USDA, and (3) make the results available to the public. Check-off boards, through a request for proposals process, contract for an independent economic evaluation to determine the effectiveness of promotion activities. The law does not specify how an independent economic evaluation should be completed, and AMS does not offer any guidance on the methodologies to use, the types of information to include, or how the results of such an evaluation are to be presented. AMS guidelines, which are available to the check-off boards, state that evaluations should (1) have a credible methodology, (2) articulate shareholder returns, and (3) present the results in a non-technical manner. The eight independent economic evaluations of check-off programs we reviewed focused on benefit-cost ratios (BCR) and returns on investment (ROI). While BCRs and ROIs are slightly different, they both measure the financial gain or loss generated from the costs of implementing a program. In both cases, economists use economic, industry-specific models to determine the benefits or economic gains from the check-off programs by isolating the impacts of program promotion dollars from other variables, such as competing products or changes in consumer income. For example, some models include the effects of changes in the prices of substitute food products, which may affect the demand for commodities. The model used in the evaluation for the beef check-off program, for instance, includes prices for both chicken and pork, as an increase in the price of chicken or pork could lead to an increase in the consumer demand for beef, regardless of check-off program activities. Other variables that may affect demand include changes in (1) consumer buying habits, (2) consumer income, and (3) government policy.
These variables can either increase or decrease the demand for commodities despite the activities of check-off programs. Evaluation models may also include variables that affect the supply of a commodity, such as increased prices that send signals to farmers to increase production. Although it is difficult to capture, some commodity evaluation models also model increases in yields and acreage to determine how much the agricultural research portion of a check-off program affects the supply of the commodity. Increased supply as a result of agricultural research expenditures can also increase producer benefits and economic gains, but according to the sorghum and cotton evaluations, many of these gains cannot be immediately or directly measured. For the eight check-off programs we reviewed, the BCRs and ROIs ranged from 2.14 to 17.40. In other words, for every dollar invested in the check-off programs, the programs returned from $2.14 to $17.40 in revenue to assessment payers (see table 3). However, it is important to note that the results of the independent economic evaluations should not be compared across check-off programs because of differing methodologies, differing data, and differing demands for the products, according to economists we interviewed. Economists we interviewed and literature we reviewed suggested that although the results of an independent economic evaluation may appear large, the amount invested in promotion activities is small compared to the total value of industry sales. Therefore, the overall impact of promotion activities on the market may be small. Program referenda largely show that most assessment payers approve of check-off programs, but not all types of assessment payers may feel that they share equally in the benefits that are found through the independent economic evaluations, according to economists we interviewed. The studies we reviewed report either average or marginal measures of effectiveness, such as a BCR. Some economists we interviewed, both those who have conducted the evaluations we reviewed and those who did not, said that these types of studies do not address the level of ROIs across the distribution of check-off program payers or how much more larger assessment payers receive in returns from their investment in the check-off program compared to smaller ones. This view was confirmed by representatives we interviewed from some of the industry organizations, who indicated that their members would prefer to better understand what they receive for their investment at the farm level. In addition, one economist we interviewed said that assessment payers may be skeptical of the results of independent economic evaluations of program effectiveness because while the costs are tangible, the benefits of the programs are not. That is, the producers cannot see what portion of their revenues is directly attributable to check-off program activities. Evaluations Vary in How They Are Conducted and Reveal Certain Methodological Limitations The independent economic evaluations we reviewed were conducted using different methodologies and reported different information. According to senior agency officials, evaluations likely vary because legislation does not include any details on how evaluations should be completed and the amount of resources that each check-off board has available to devote to evaluations varies.
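One reason methods differ is that there is no single canonical model of promotion effectiveness. As a purely illustrative sketch—generic notation, not the specification used in any of the evaluations we reviewed—a demand model of the kind described above might take a double-log form:

$$\ln Q_t = \beta_0 + \beta_1 \ln P_t + \beta_2 \ln P_t^{s} + \beta_3 \ln I_t + \beta_4 \ln A_t + \varepsilon_t,$$

where $Q_t$ is the quantity of the commodity demanded in period $t$, $P_t$ is its own price, $P_t^{s}$ is the price of a substitute (for example, chicken or pork in a beef model), $I_t$ is consumer income, $A_t$ is check-off promotion expenditure, and $\varepsilon_t$ is an error term. The estimate of $\beta_4$ is what isolates the effect of promotion dollars from the other variables and underpins the benefit side of a BCR or ROI; different evaluators can reasonably specify this equation, and the markets around it, in different ways.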
Nearly all of the economists we interviewed said that it would be useful to have minimum standards for information that should be included in the evaluations. Some independent economic evaluations used different types of models and data to estimate the benefits and costs to assessment payers. For example, the evaluations for the egg and honey check-off programs used two different types of methodologies to estimate increases in demand resulting from the programs’ promotional activities. Some other independent economic evaluations in our sample, such as for cotton and fluid milk, used multi-market models that incorporated components for substitute products, the foreign sector, and the government sector. Some independent economic evaluations, such as the beef evaluation, measured a marginal BCR, and others measured an average benefit-cost ratio. The cotton and sorghum evaluations performed an analysis of how increases in yields, acreage, and production resulting from the research portion of the check-off programs affected the supply of the commodities. The independent economic evaluations also examined different time periods in their analyses, depending on the available data (see table 3). For example, the egg evaluation covered the period of 2007 through 2010, and the fluid milk evaluation covered 1995 through 2012. In addition to having different methodologies to calculate benefits and costs, the information and analyses included in the independent economic evaluation reports also varied among the eight programs we reviewed. For example, the beef check-off evaluation includes a section on the optimal allocation of funds to domestic activities of the program, which is not included in any other report. Seven of the eight evaluation reports had a conclusions section. One of the evaluation reports included a recommendations section, while others did not. Although the law does not specify information required to be included in the independent economic evaluations, representatives from one industry organization we interviewed said that having the information in a consistent format could help ensure that stakeholders could compare information from one evaluation to the next for a given check-off program. The independent economic evaluations provided useful information to key stakeholders and the general public, but we found that they also included a number of caveats and limitations. Some of these limitations resulted from the nature of a commodity or program itself and others from the modeling procedures used. According to economists we interviewed and senior agency officials, the law is not prescriptive about how evaluations are to be conducted, and the boards differ in the amount of resources available to devote to the evaluations. If, for example, a board has limited resources available for an evaluation, there may not be funds available to purchase a certain set of data. For the sample of eight evaluations we reviewed, these limitations included the following: Data limitations: A number of the independent economic evaluations had data limitations. For example, one independent economic evaluation (highbush blueberry check-off program) lacked either wholesale or retail price data for its demand model, and another (sorghum check-off program) lacked program data as it had only been in existence for 5 years when the evaluation was performed.
All of the economists we interviewed who had completed the eight evaluations we reviewed said that data are a challenge when conducting the evaluations, either because such data do not exist or because the check-off boards do not have the resources to buy the data. Not discounting the BCR to present value: The cotton check-off evaluation was the only one in our sample with a methodology that discounted the BCR to present value to account for the time value of money. Discounting a program’s benefits and costs to present value transforms gains and losses occurring in different time periods to a common unit of measurement. Not accounting for spillover effects: Some independent economic evaluations did not include the spillover effects—the cross-commodity impact of promotion—on related markets, though some, such as the cotton evaluation, did account for spillover effects on competing commodities. If spillover effects pertain to a commodity, failure to account for these effects could overstate the benefits of a program and cause an upward bias in computing the BCR. Not adjusting models for structural changes: Some independent economic evaluations did not adjust models for structural changes in the industry over time. While some independent economic evaluations we reviewed, such as those for the honey and beef check-off programs, did use data or methods that accounted for changes in market structure over time, others did not. For example, for the pork check-off program, some hog farms have specialized in a single phase of production and have encountered substantial gains in productivity because of technology over the past several decades, but the independent economic evaluation did not reflect this. Failure to correct for such structural change, if applicable to a commodity, can lead to incorrect modeling and misleading policy implications. AMS’s standard operating procedures acknowledge that each check-off program varies in size and scope; therefore, the amount of resources each program can devote to an independent economic evaluation varies. Smaller programs may have independent economic evaluations that reflect the realities of program scope, financial capability, and data availability. Our discussions with the economists who conducted the evaluations that we reviewed confirmed that this is the case. They said that the smaller programs are able to devote fewer resources to independent economic evaluations; therefore, the economist conducting an evaluation may not be able to complete all of the analysis that could be completed for a larger program that is able to pay for more complex analysis. According to senior agency officials, in some instances, a broader evaluation is not necessary because of the emphasis and goals of the program. In addition, the resources a board is able to devote may vary from evaluation to evaluation. For example, one economist said that he worked with a board that wanted a more comprehensive evaluation than was previously done. The new evaluation model included additional data over a longer period of time, which ultimately led to an increased ROI.
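The discounting limitation noted above can be made concrete with a simple formula—again a stylized sketch in generic notation, not the calculation used in the cotton evaluation or any other report. A benefit-cost ratio that accounts for the time value of money discounts each year’s benefits $B_t$ and costs $C_t$ at a discount rate $r$ before taking the ratio:

$$BCR = \frac{\sum_{t=0}^{T} B_t/(1+r)^t}{\sum_{t=0}^{T} C_t/(1+r)^t}.$$

Because assessments are typically paid up front while promotion benefits accrue in later years, an undiscounted ratio will generally be higher than its discounted counterpart. The choice of $r$ is an assumption worth disclosing, and reporting the ratio over a range of discount rates is one low-cost form of sensitivity analysis.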
AMS Does Not Consistently Document Reviews of Check-off Evaluations In addition to ensuring that independent economic evaluations are conducted every 5 years and encouraging boards to make them available to assessment payers, AMS’s standard operating procedures state that marketing specialists should ensure that independent economic evaluations (1) have a credible methodology and results, (2) articulate shareholder benefits, and (3) present results in a non-technical manner. To verify that these three directives are met, the standard operating procedures state that marketing specialists may consult with agency economists. They are directed to document verification in writing. Outside of any agency review, there is no requirement that independent economic evaluations be peer reviewed. A National Academies report states that peer review is characterized, in part, as being a documented, critical review of assumptions, calculations, and methodology, performed by a person with technical expertise in the subject matter who is independent of and external to the work being reviewed. The report further states that the peer reviewer, to the extent possible, should have sufficient freedom from funding considerations to ensure that the work is impartially reviewed. According to senior agency officials, AMS economists meet this definition, and their review of the independent economic evaluations can be considered peer review. Officials said that the economists on staff critically review the evaluations; they all have PhDs in economics and are independent as they do not work directly with the check-off programs except for reviewing the evaluations. Three of the four AMS commodity areas—Cotton and Tobacco, Dairy, and Specialty Crops—utilized an AMS economist to review the independent economic evaluations and document that review. Senior agency officials said that the Livestock, Poultry, and Seed commodity area has an AMS economist review the independent economic evaluations but does not document that review. According to senior agency officials, the Livestock, Poultry, and Seed commodity area has relied on informal reviews of the evaluations by an economist, which are orally presented to the director of the commodity area. Further, the economists who completed the eight independent economic evaluations we reviewed indicated that although their preference is to have the evaluations peer reviewed, this is not always possible because of time constraints and other priorities. One economist said that the board he worked with included a contractual requirement that the independent economic evaluation be peer reviewed. Because the Livestock, Poultry, and Seed commodity area does not document its reviews of independent economic evaluations, only four of the eight check-off programs in our sample had documented reviews of the evaluations. All four of the documented reviews ensured that the independent economic evaluations had a credible methodology and results and articulated shareholder benefits, as stated in the standard operating procedures. However, only two of these four check-off programs included in their documented review whether results were presented in a non-technical manner, as also stated in the standard operating procedures.
Further, the internal reviews did not use standard criteria to determine whether the independent economic evaluations had a credible methodology or results, which is important because, as noted earlier, the evaluations we reviewed varied in their methodology and we found that they had certain limitations. Although check-off programs are not subject to the guidelines in the Office of Management and Budget's Circular A-94, the circular provides general guidance for conducting analyses to help federal agencies efficiently allocate resources through well-informed decision making. For example, Office of Management and Budget Circular A-94 establishes key elements of an economic analysis, including (1) a statement of the objective and scope of the analysis, (2) an identification of alternatives, (3) an analysis of the economic effects, (4) a sensitivity analysis, and (5) adequate documentation and transparency. Conducting and documenting reviews of independent economic evaluations using criteria can be useful. For example, in 2014, a senior agency official found several inconsistencies in a check-off program independent economic evaluation. The senior agency official assigned an AMS economist and marketing specialist to work with the evaluator to revise econometric models to more accurately capture the activities of the check-off program. According to the official, if the independent economic evaluation had not been reviewed, the benefits of the program would have been understated and would have misled those paying into the check-off program. Without developing criteria by which AMS can assess the methodology and results of independent evaluations and document those assessments to ensure that the standard operating procedures are met, the agency's assessments of independent economic evaluations may be inconsistent across check-off programs and misleading to agency officials, check-off boards, and assessment payers.

Conclusions

AMS oversees commodity check-off programs that conduct research and promotion activities to strengthen 22 commodities' position in the marketplace. The agency has taken steps to improve oversight activities based on recommendations in USDA OIG's 2012 report, but it continues to face challenges in other oversight activities. For example, AMS has not consistently reviewed subcontracts during its management reviews. Without revising its standard operating procedures for check-off programs to recognize that management reviews should include a sample of subcontracts for review, AMS's ability to prevent misuse of subcontract funds is impaired. In addition, AMS has not consistently followed up on recommendations made to check-off boards, although its guidelines and standard operating procedures state that marketing specialists are to ensure that corrective actions are taken by the boards in a timely manner if a matter is recommended in a management review. Without establishing a mechanism for documenting and tracking follow-up with check-off boards on the implementation of management review recommendations, AMS has no assurance that it is consistently monitoring the status of corrective actions. Moreover, AMS has not ensured that independent financial audits contain statements of assurance as called for in the agency's program guidelines or standard operating procedures.
Without ensuring that its annual independent financial audits include the five statements of assurance outlined in the standard operating procedures, AMS will have less certainty that check-off funds are not subject to waste, fraud, or mismanagement. Further, although principles of corporate governance state the importance of transparency for stakeholders, AMS has not ensured that certain information, such as budget summaries and program evaluations, is presented on check-off program websites and has not included in its guidelines or standard operating procedures that certain information should be included on program websites, although the agency's program guidelines recognize that transparency of check-off funds is critical. Without including in the guidelines and standard operating procedures that key check-off board documents are to be posted on the check-off program's website, AMS may miss the opportunity to ensure that some assessment payers have access to information on program operations and effectiveness. Finally, check-off boards are meeting legislative deadlines by completing independent economic evaluations of effectiveness every 5 years; however, the evaluations vary and have certain methodological limitations. Without developing criteria by which AMS can assess whether evaluations have a credible methodology and results and documenting those assessments, the assessments may be inconsistent across check-off programs and misleading to agency officials, check-off boards, and assessment payers.

Recommendations for Executive Action

We are making the following five recommendations to the Administrator of the Agricultural Marketing Service:

The Administrator of AMS should revise the standard operating procedures for AMS's check-off programs to state that management reviews include a sample of subcontracts for review. (Recommendation 1)

The Administrator of AMS should establish a mechanism for documenting and tracking follow-up with check-off boards on the implementation of management review recommendations. (Recommendation 2)

The Administrator of AMS should ensure that annual independent audits include the five statements of assurance as outlined in the standard operating procedures. (Recommendation 3)

The Administrator of AMS should include in the guidelines and standard operating procedures that key check-off board documents, such as bylaws and policy statements, annual reports, and independent evaluations of economic effectiveness, are posted on the check-off programs' websites. (Recommendation 4)

The Administrator of AMS should develop criteria by which to assess the methodology and results of independent evaluations and document those reviews to ensure that the standard operating procedures are met. (Recommendation 5)

Agency Comments

We provided a draft of this report for review and comment to USDA. An auditor with AMS's Management and Analysis Program responded via e-mail on October 24, 2017, that the agency generally agreed with our findings and recommendations. We are sending copies of this report to the appropriate congressional committees, the Secretary of Agriculture, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or morriss@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report.
GAO staff who made key contributions to this report are listed in appendix I.

Appendix I: GAO Contact and Staff Acknowledgments

In addition to the individual named above, key contributors to this report included Thomas M. Cook (Assistant Director), Rose Almoguera, Kevin S. Bray, Barbara El Osta, Cindy Gilbert, Holly Halifax, Khali Hampton, Dan Royer, Holly Sasso, Sheryl Stein, and Kiki Theodoropoulos.
Why GAO Did This Study

"Got milk?" and "Pork: The Other White Meat" are examples of advertising campaigns undertaken by 2 of the 22 federal agricultural research and promotion programs, commonly known as commodity check-off programs. These programs, funded by a fraction of the sale of each unit of a commodity, are led by boards consisting of industry members appointed by the Secretary of Agriculture. The programs conduct research and promotion activities to strengthen a commodity's position in the market. In 2016, check-off funds totaled over $885 million. By law, funds cannot be used for lobbying or disparaging other commodities, among other things. AMS has primary responsibility for overseeing the check-off programs. GAO was asked to review AMS's oversight of the check-off programs. This report examines (1) the extent to which AMS has addressed previously identified weaknesses in its oversight and (2) how the effectiveness of the programs has been evaluated and what the results have indicated. GAO selected a sample of 8 such programs—selected, in part, based on total funds collected—and reviewed laws, regulations, and agency guidance. GAO interviewed agency officials, check-off board executives, and economists.

What GAO Found

The U.S. Department of Agriculture's (USDA) Agricultural Marketing Service (AMS) has improved its oversight of check-off programs since USDA's Office of Inspector General (OIG) made recommendations in a 2012 report. In response to two OIG recommendations, AMS developed and implemented standard operating procedures, which outline specific oversight responsibilities of AMS, and began to conduct internal reviews of its oversight functions. However, GAO found that AMS does not consistently review subcontracts—a legal agreement between a contractor and a third party—or ensure that certain documents are shared with stakeholders on program websites.

Subcontracts. Under AMS's 2015 guidelines for check-off programs, which cover broad oversight activities, staff are to review a sample of subcontracts during agency reviews of program operations. However, AMS did not revise its standard operating procedures to match its guidelines with this responsibility, and GAO found that AMS reviewed subcontracts for only one check-off program in its sample of eight. Without revising the standard operating procedures to include a review of subcontracts, AMS's ability to prevent misuse of funds is impaired.

Transparency. According to leading business principles, transparency is central to stakeholders' access to regular, reliable, and comparable information. However, GAO found that only four of the eight check-off programs reviewed posted all key documents, such as budget summaries and evaluations of effectiveness, to program websites. GAO found that AMS's guidelines state that budget summaries should be posted on program websites, while the other key documents are to be available on the website or otherwise made available to stakeholders. Agency officials said that boards would supply documentation if contacted by a stakeholder. Industry representatives GAO interviewed said that transparency of how funds are used and the effectiveness of programs are important to their members. Without including in its guidelines and standard operating procedures that all key documents should be posted on a check-off program's website, AMS may miss an opportunity to ensure that stakeholders have access to information on program operations and effectiveness.
Independent economic evaluations of the effectiveness of check-off programs, required by law to be conducted every 5 years, have generally shown positive financial benefits. For the eight evaluations GAO reviewed, benefits ranged from an average of $2.14 to $17.40 for every dollar invested in the programs. However, the evaluations varied in the methods used and had certain methodological limitations. For example, some evaluations did not account for the effects of promotion from competing commodities, which could overstate the programs' benefits. AMS's standard operating procedures state that the agency should review the evaluations to ensure that there is a credible methodology, among other things; however, AMS did not consistently document reviews of the evaluations or have criteria by which to review the evaluations. Without developing criteria to assess the methodology and results of evaluations, the agency's assessments of independent economic evaluations may be inconsistent across check-off programs and misleading to stakeholders.

What GAO Recommends

GAO is making five recommendations, including that USDA revise its standard operating procedures to include the review of subcontracts, include key documents on check-off program websites, and develop criteria to assess evaluations. USDA generally agreed with GAO's recommendations.
Background

In providing health care services to veterans, clinicians at VAMCs use RME, such as endoscopes and surgical instruments, which must be reprocessed between uses. Reprocessing covers a wide range of instruments and has become increasingly complex. VHA has developed policies that VAMCs are required to follow to help ensure that RME is reprocessed correctly. In addition, VHA policy requires that VHA and VISNs oversee VAMCs' reprocessing of RME and that VAMCs report incidents involving improperly reprocessed RME.

Complexity of RME Reprocessing

According to reports from RME professional associations, the complexity of RME reprocessing has increased as the complexity of medical instruments has increased. While at one time reprocessing surgical and dental instruments such as scalpels and retractors might have been the bulk of an SPS program's tasks, now SPS programs are responsible for reprocessing complex instruments such as endoscopes. Reprocessing these instruments is a detailed and time-consuming process, and their increasing complexity requires a corresponding increase in the skills and time required to safely reprocess them. (See figure 1 for an example of steps that can be required for endoscope reprocessing.)

VHA Roles and Responsibilities for RME Reprocessing

Within VHA, the National Program Office for Sterile Processing, under the VHA Deputy Under Secretary of Health for Operations and Management, is responsible for developing RME reprocessing policies. It is also responsible for ensuring that VISNs and their respective VAMCs are adhering to its policies. Each of the 18 VISNs is responsible for ensuring adherence to VHA's RME policies at the VAMCs within its region. In turn, each of the 170 VAMCs is responsible for implementing VHA's policies related to RME. Within each VAMC, the SPS department is primarily responsible for reprocessing RME, which is used by clinicians in the operating room and other clinical service lines such as the dental and gastroenterology services. (See fig. 2.) Additionally, the SPS department collaborates with other VAMC departments, such as Environmental Management and Engineering Services, on variables that affect RME reprocessing, such as the climate where RME is reprocessed.

VHA Policies for RME Reprocessing and Related Oversight

In March 2016, VHA issued Directive 1116(2)—a comprehensive policy outlining requirements for SPS programs and for overseeing RME reprocessing efforts.

SPS program operation requirements. To help ensure that VAMCs are reprocessing RME correctly, VHA policy establishes various requirements for the SPS programs in VAMCs to follow, such as requirements that SPS staff monitor sterilizers to ensure that they are functioning properly, use personal protective equipment when performing reprocessing activities, separate dirty and clean RME, and maintain environmental controls. For example, VAMCs are required to maintain certain temperature, humidity, and air flow standards in areas where RME is reprocessed and stored. Additionally, in order to ensure that RME is reprocessed in accordance with manufacturers' guidelines, VAMCs are required to assess staff on their competence in following the related reprocessing steps.

Oversight requirements. To help ensure that VAMCs are adhering to VHA's RME policies, VHA requires inspections, reports on incidents of improperly reprocessed RME, and corrective action plans for both non-adherent inspection results and incidents of improperly reprocessed RME.
Inspections. VISNs are required to conduct annual inspections at each VAMC within their VISN and to report their inspection results to the VHA National Program Office for Sterile Processing. The VISN inspections are a key oversight tool for regularly assessing adherence to RME policies in the SPS, gastroenterology, and dental areas within VAMCs and use a standardized inspection checklist known as the SPS Inspection Tool. According to VHA officials, VHA developed the SPS Inspection Tool and generally updates it annually. The most recent SPS Inspection Tool, for fiscal year 2017, contained 148 requirements. Examples include requirements regarding proper storage of RME and following manufacturers' instructions when reprocessing RME. Although VAMCs are also required to conduct annual self-inspections using the SPS Inspection Tool and report the results to VHA, the VISN annual inspections are a separate and important level of oversight. Finally, according to VHA officials, while not a formal policy, VHA's National Program Office for Sterile Processing also inspects each VAMC at least once every 3 years. VHA requires VISNs and VAMCs to conduct their own inspections even in years when VHA also conducts inspections.

Incident Reports. VHA collects incident reports, or "issue briefs," generated by VAMCs on incidents involving RME to help determine the extent to which VAMCs are adhering to RME policies, among other things. VHA requires VAMCs to report significant clinical incidents or outcomes involving RME that negatively affect groups or a cohort of veterans in an issue brief. According to a VHA official, when VAMC staff report incidents involving RME to their facility leadership, these officials should follow VHA guidance to determine which incidents, if any, should be reported in an issue brief to the VAMC's VISN. Similarly, VISN officials, in turn, are responsible for determining whether an incident should be reported in an issue brief to VHA.

Corrective Action Plans. Corrective action plans—which detail an approach for addressing any areas of policy non-adherence identified in inspections or incidents identified in issue briefs—are required at both the VISN and VAMC levels. Specifically, both VISNs and VAMCs are required to develop corrective action plans for any deficiencies identified through their inspections, and VAMCs are required to develop corrective action plans for incidents identified in issue briefs. According to a VHA official, VISNs and VAMCs are not required to send corrective action plans from inspections to VHA; however, VAMCs must send their corrective action plans to the VISN and also send any related to issue briefs to VHA. Further, according to a VHA official, although neither the VAMC nor the VISN corrective action plans from inspections are monitored by VHA, VHA does expect VISN officials to inform it of any critical issues that VISNs believe warrant VHA attention. For example, VHA officials would expect VISNs to report instances when RME issues result in the cancellation of procedures for multiple patients or when the VISN discovers a VAMC is lacking documentation of RME reprocessing competency assessments for a large number of its SPS staff.

Reports on Issues Related to RME Reprocessing

A number of recent reports have identified several RME-related issues at VAMCs, including non-adherence to RME policies. The issues have ranged from improperly reprocessed RME being used on patients to the cancellation of medical procedures due to lack of available RME.
For example:

In March 2018, the VA Office of Inspector General released a report describing problems identified at the Washington, D.C. VAMC, some of which were RME-related. For example, the office determined that ineffective sterile processing contributed to procedure delays due to unavailable RME. The report included specific recommendations, such as ensuring there are clearly defined and effective procedures for replacing missing or broken instruments and implementing a quality assurance program to verify the cleanliness, functionality, and completeness of instrument sets before they are used in clinical areas. The VAMC Director agreed with those recommendations.

In fiscal year 2017, the VA Office of Inspector General reviewed 29 VAMCs and issued reports for each in response to several RME-related complaints received through its reporting hotline. The office identified issues such as staff failure to perform quality control testing on endoscopes or to document their competency assessments of SPS staff in employee files. Many of the reports included specific recommendations, such as performing quality control testing on all endoscopes and ensuring SPS staff are assessed for competency at orientation and annually for the types of RME they reprocess. The VAMC Directors agreed with those recommendations.

In 2016, the VA Office of the Medical Inspector released a report that substantiated allegations that SPS practices led to the delivery of RME with bioburden, debris, or both to the operating room. The report included specific recommendations, such as reeducating SPS staff on proper SPS standards and ensuring that all training and assessments of RME reprocessing competency of SPS staff are completed as required. The VAMC Director agreed with those recommendations.

In 2011, we released a report on VA RME that found issues with RME reprocessing. We found, for example, that VHA did not provide specific guidance on the types of RME that require device-specific training and that the guidance VHA did provide on RME reprocessing training was conflicting. We issued several recommendations for improvement, which VA has implemented.

VHA's Oversight Does Not Provide Reasonable Assurance that VAMCs Are Following RME Policies

VHA Does Not Have Complete Information on Adherence to RME Policies from Inspections of VAMCs

VHA has not ensured that it has complete information from the annual inspections VISNs conduct—a key oversight tool providing the most current VA-wide information on adherence to RME policies—and therefore does not have reasonable assurance that VAMCs are following RME policies intended to ensure veterans are receiving safe care. For fiscal year 2017, we determined that VHA should have had records of 144 VISN SPS inspection reports to have assurance that all required VISN SPS inspections had been conducted. However, our review shows that as of February 2018, VHA had 105 VISN SPS inspection reports and was missing 39—more than one quarter of the required inspection reports. We also determined that there were two VISNs from which VHA did not have any fiscal year 2017 reports. For the missing SPS inspection reports, VISN officials suggested several reasons why the inspections were either not conducted or were conducted but the reports not submitted to VHA. For example, officials from one of the VISNs from which VHA had no SPS inspection reports told us that VISN management staffing vacancies prevented it from conducting all of its inspections.
An official from the other VISN from which VHA had no SPS inspection reports provided evidence that it had conducted all but one of the inspections, but the official told us the VISN did not submit reports because it has yet to receive information from VHA regarding VISN inspection outcomes, common findings, or best practices and therefore sees no value in submitting them. VISNs provided us with evidence showing that they conducted 27 of the 39 inspections that were missing from VHA's data. We analyzed these 27 reports to identify the information about non-adherence to RME policy requirements that VHA does not have from these missing VISN inspections. We determined that the 10 requirements for which these VAMCs had the most non-adherence were related to quality, training, and environmental issues, among other things, with the extent of non-adherence ranging from 19 to 38 percent. For example, there were 19 and 26 percent non-adherence rates to the requirements that instrument and equipment levels be sufficient to meet workloads and that a process be in place to ensure staff receive make-up/repeat training, respectively. (See Appendix I.) We also found that variation in SPS Inspection Tools and related guidance from VHA resulted in incomplete inspection results for the gastroenterology and dental areas. VHA provided VISNs with three different SPS Inspection Tools throughout the course of fiscal year 2017. Although VHA guidance stated otherwise, only the third SPS Inspection Tool—which was used during the second half of the fiscal year—contained requirements specific to the gastroenterology and dental areas. A VHA Central Office official told us the office had not been aware that it did not have all of the VISN inspection reports until it took steps to respond to our data request. The official told us VHA granted VISNs a 3-month extension for fiscal year 2017—meaning that VISNs had until the end of December 2017 to submit their inspection results—and had granted similar extensions for at least the past 4 fiscal years as well. For all of those years, the VHA official told us, the office did not have all VISN inspection reports, even after granting extensions. As a result, VHA did not have assurance that all of the inspections had been conducted. When asked why VHA had not been aware that it did not have all VISN SPS inspection reports, a VHA official said that the office has largely relied on the VISNs to ensure complete inspection result reporting because it has not had the resources to dedicate to monitoring inspections. The official told us that VHA has asked for and recently received approval to hire a data analyst who could potentially be responsible for monitoring the VISN inspection reports. VHA's lack of complete information from inspection results is inconsistent with standards for internal control in the federal government regarding monitoring and information, which state that management should establish and operate monitoring activities and use quality information to achieve the entity's objectives. Without such controls, VHA lacks reasonable assurance that VAMCs are following RME policies designed to ensure that veterans are receiving safe care.

VHA Does Not Consistently Share Information that Could Help VAMCs Follow RME Policies

We also found that VHA does not consistently share information, particularly inspection results, with VISNs and VAMCs, and that VISNs and VAMCs would like more of this information.
Specifically, about two-thirds of VISN and VAMC officials told us that sharing information on the common issues identified in the inspections of other VAMCs, as well as potential solutions developed to address these issues, would allow VAMCs to be proactive in strengthening their adherence to RME policies and ensuring patient safety. For example, a VAMC official told us that there were problems with equipment designed to sterilize heat- and moisture-sensitive devices, and seeing how other VAMCs addressed the problem was helpful for their VAMC. Further, officials from some VISNs said VHA cited their VAMCs for issues that had been found at other facilities and, had the VAMCs been aware of the issue beforehand, they could have corrected or improved their processes earlier. When asked about sharing inspection results and other information, VHA Central Office officials told us the office does not analyze or share information from VISN inspections because of a lack of resources. A VHA official told us that the office does create an internal report of common issues identified through the third of VAMCs it inspects each year, but the office does not share this report with VISNs and VAMCs because the office lacks the resources needed to prepare reports that are detailed enough to be understood correctly by recipients. According to this official, VHA has occasionally shared information it has identified on common inspection issues through newsletters, national calls, and trainings; however, officials at close to half of the VISNs and VAMCs we spoke to said that they rarely or never get this information. For example, officials from one VISN told us they recall only one or two instances where VHA sent a summary of the top five RME-related issues found during inspections. Insufficient sharing of information is inconsistent with standards for internal control in the federal government regarding communication, which state that management should internally communicate the necessary quality information to achieve the entity's objectives. Until this sharing becomes a regular practice, VHA is missing an opportunity to help ensure adherence to its RME policies, which are intended to ensure that veterans receive safe care.

VAMCs Report Facing Challenges Related to RME Policies and Workforce Needs, but VHA Has Not Sufficiently Addressed These Challenges

According to interviews with officials from all of the VISNs and selected VAMCs, the top five challenges VAMCs face in operating their SPS programs relate to meeting certain RME policies and addressing SPS workforce needs. In particular, officials told us that VAMCs have challenges (1) meeting two RME policy requirements related to climate control monitoring and a reprocessing transportation deadline, and (2) addressing SPS workforce needs related to lengthy hiring timeframes, the need for consistent overtime, and limited pay and professional growth. (See Table 1.) Regarding the challenges VAMCs face in meeting RME policy requirements, the majority of VISN and selected VAMC officials interviewed reported experiencing challenges adhering to two requirements from VHA Directive 1116(2), issued in 2016.

Climate control monitoring requirement. Officials reported that meeting the climate control monitoring requirement related to airflow and humidity is challenging for their VAMCs.
Under the requirement, VAMCs must monitor the humidity and airflow in facility areas where RME is reprocessed and stored in order to ensure that humidity levels do not exceed a certain threshold and thereby allow the growth of microorganisms. According to almost all VISN officials, meeting the requirement is a challenge for some, if not all, of their VAMCs, and in particular for older VAMCs that lack proper ventilation systems. We also found some instances of non-adherence on this issue in the group of VISN inspection reports we reviewed. In a September 2017 memorandum, VHA relaxed the requirement (e.g., adjusted the thresholds). Additionally, according to a VHA official, VHA wants to renovate all outdated VAMC heating, ventilation, and air conditioning systems to help VAMCs meet the requirement. Further, according to VHA officials, VHA also allows VAMCs to apply for a waiver exempting them from having to meet this requirement if they have an action plan in place that shows they are working toward meeting the requirement.

Reprocessing transportation deadline requirement. Officials reported that meeting the reprocessing transportation deadline was also challenging for their VAMCs. Under the requirement, used RME must be transported to the location where it will be reprocessed within 4 hours of use to prevent bioburden or debris from drying on the instrument and causing challenges with reprocessing. Officials reported this requirement as particularly challenging for VAMCs that must transport their RME to another facility for cleaning, such as community-based outpatient clinics in rural areas that must transport their RME to their VAMC's SPS department. We also found some instances of non-adherence on this issue in the group of VISN inspection reports we reviewed. In June 2016, VHA issued a memorandum allowing the use of a pre-cleaning spray solution that, if used, allows offsite facilities such as community-based outpatient clinics to transport RME within 12 hours instead of the required 4 hours.

VHA has made some adjustments to these requirements, although some officials told us the requirements remain difficult to meet. Specifically, over half of the VISN officials reported that the climate control monitoring requirement continues to be a challenge for their VAMCs. Further, some of the officials told us that meeting the 12-hour reprocessing transportation requirement using the pre-cleaning spray was still challenging, due to the distance between clinics and their VAMC's SPS department; as a result, some facilities have decided to use disposable medical equipment that does not require reprocessing to avoid this requirement completely. When we shared this information with a VHA official, the official stated that providing general information on how all facilities can meet the climate control monitoring requirement is impossible due to the uniqueness of each facility and that VHA has no further plans to adjust the reprocessing transportation deadline requirement. However, these challenges remain, and some officials have expressed frustration with the limited support they have received from VHA. In September 2017, we recommended that VHA establish a mechanism by which program offices systematically obtain feedback from VISNs and VAMCs on national policy after implementation and take the appropriate actions. Our findings provide further evidence of the need for VA to address this recommendation.
Regarding the challenges VAMCs face in meeting SPS workforce needs, almost all of the 18 VISN officials and officials from the three selected VAMCs reported experiencing challenges related to lengthy hiring timeframes, the need for consistent overtime, and limited pay and professional growth. According to officials, these challenges result in SPS programs having difficulty maintaining sufficient staffing levels.

Lengthy hiring timeframes. Officials reported that the lengthy hiring process for SPS staff creates challenges in maintaining a sufficient SPS workforce. For example, officials from one VISN estimated that on average it can take 3 to 4 months for a person to be hired. Officials from a few other VISNs noted that not only does the lengthy hiring process create challenges in recruiting qualified candidates (because they accept other positions where they can be more quickly employed), but it also results in long periods of time when SPS programs are short-staffed.

Need for overtime. Officials reported that needing their SPS staff to work overtime is a challenge. Specifically, 16 of the 18 VISN officials stated that there is a need for staff at their VAMCs to work overtime either "all, most, or some of the time." Further, officials from one VISN told us their VAMCs have used overtime to meet the increased workload required to implement VHA's RME policies; one official noted that the overtime has led to dissatisfaction and retention issues among SPS staff.

Limited pay and professional growth. Officials identified limited pay and professional growth associated with the current pay grade as the biggest SPS workforce challenge. Almost all officials stated that the current pay grade limits the pay and potential for professional growth for the two main SPS positions—medical supply technicians, who are responsible for reprocessing RME, and SPS Chiefs, who have supervisory responsibility. Specifically, the relatively low maximum allowable pay discourages staff from accepting or staying in positions, and the current pay grade does not create a career path for SPS medical supply technicians to grow within the SPS department. Officials from one VISN told us that all VAMCs in their VISN have lost SPS staff due to the low pay grade for both positions. VHA officials said a proposed increase in the pay grade for SPS staff has been drafted; however, they do not know when or if it will be made effective. Further, according to officials with knowledge of the proposed changes, the changes could still be insufficient to recruit and retain SPS staff with the necessary skills and experience.

Some VISN and VAMC officials told us that difficulties maintaining sufficient SPS staff levels have in some instances adversely affected patients' access to care and increased the potential for reprocessing errors that could affect patient safety. According to these officials, staffing challenges can affect access to care when facilities have to limit or delay care—such as surgeries—because there are not enough staff available to process all the necessary RME. An official at one VAMC told us that their SPS staff must review available RME daily to determine whether scheduled surgeries or other procedures can proceed. Further, among the 18 operating room nurse managers who responded to our inquiries, 15 indicated they have experienced operating room delays because of RME issues.
In addition, some VISN and VAMC officials told us staffing challenges can potentially have an impact on patient safety, because when SPS staffing is not sufficient, mistakes are more likely to occur. For example, officials told us that if SPS staffing levels are low, particularly if they are low for an extended period of time, there is an increased chance RME will be improperly reprocessed and, if used on a patient, put that patient's safety at risk. A 2018 VA Office of Inspector General report on the Washington, D.C. VAMC found that consistent SPS understaffing was a factor in SPS staff not being available to meet providers' need for reprocessed RME; according to the report, "veterans were put at risk because important supplies and instruments were not consistently available in patient care areas." While VHA is aware of these workforce challenges cited by VISN and VAMC officials, it has not studied SPS staffing at VAMCs. As a result, it does not know whether or to what extent the workforce challenges VISNs and VAMCs report adversely affect VAMCs' ability to effectively operate their SPS programs and ensure safe care for veterans. A National Program Office for Sterile Processing official indicated that while the office might have access to some of the necessary data from VAMC SPS departments, it does not have all the necessary data or staff needed to assess SPS staffing levels. Furthermore, the official added, conducting such a study would not be the responsibility of her office. Officials from the Workforce Management and Consulting Office said VHA is considering a study of SPS staffing, given the results of the VA Office of Inspector General's 2018 review that identified high vacancy rates as a contributing factor to the challenges with the SPS program at the Washington, D.C. VAMC. However, VHA does not have definitive plans to complete this type of study or a timeframe for when the decision will be made. Until the study is conducted and actions are taken based on the study, as appropriate, VHA will not have addressed a potential risk to its SPS programs. This is inconsistent with standards for internal control in the federal government for risk assessment, which state that management should identify, analyze, and respond to risks related to achieving defined objectives. Without examining SPS workforce needs and taking action based on this assessment, as appropriate, VHA lacks reasonable assurance that its approach to SPS staffing helps ensure veterans' access to care and safety.

Conclusions

The proper reprocessing of surgical instruments and other RME used in medical procedures is critical for ensuring veterans' access to safe care. We have previously found that VA had not provided enough guidance to ensure SPS staff were reprocessing RME correctly; in 2016, VA issued Directive 1116(2), with requirements for the SPS program. While this is a good step, our current review shows that VHA needs to strengthen its oversight of VAMCs' adherence to these requirements. VHA has not ensured that it has complete information from inspections of VAMCs, nor does VHA consistently share inspection results and other information that could help VAMCs meet the requirements. Without analysis of complete information from inspections and consistent sharing of this information, VHA does not have reasonable assurance that VAMCs are following all RME policies, and VHA is missing an opportunity to strengthen VAMCs' adherence to RME requirements.
Furthermore, officials from some VISNs and selected VAMCs report challenges meeting two RME policy requirements—the climate control and reprocessing transportation deadline requirements. If VHA implements a recommendation we made in 2017 for the agency to obtain feedback from VISNs and VAMCs on their efforts to implement VHA policies and take the appropriate actions, it could help with these challenges. Additionally, while nearly all of the officials from the 18 VISNs and selected VAMCs interviewed reported challenges maintaining a sufficient SPS workforce, VHA does not know whether the current SPS workforce meets VAMCs' SPS workforce needs. VHA officials say that VHA is considering studying its SPS workforce; however, it has not done so or announced a timeframe for doing so. Until it conducts such a study, VHA will not know whether or to what extent reported SPS workforce challenges adversely affect the ability of VAMCs to effectively operate their SPS programs and ensure access to safe care for veterans.

Recommendations for Executive Action

We are making the following three recommendations to VHA:

The Under Secretary of Health should ensure all RME inspections are being conducted and reported as required and that the inspection results VHA has are complete. (Recommendation 1)

The Under Secretary of Health should consistently analyze and share top common RME inspection findings and possible solutions with VISNs and VAMCs. (Recommendation 2)

The Under Secretary of Health should examine the SPS workforce needs and take action based on this assessment, as appropriate. (Recommendation 3)

Agency Comments

We provided a draft of this report to VA for comment. In its written comments, which are provided in appendix III, VA concurred with our recommendations. In its comments, VA acknowledged the need for complete RME inspection information, stating that VHA will establish an oversight process for reviewing and monitoring findings from site inspections and for reporting this information to VHA leadership. Further, VA noted that VHA will analyze data from RME inspections and share findings and possible solutions with VISNs and VAMCs via a written briefing that will be published on VHA's website and discussed during educational sessions and national calls. VA also noted that VHA has an interdisciplinary work group that has identified actions it can take to address SPS workforce needs, including implementing an enhanced market-based approach for determining pay levels and developing a staffing model so VAMCs can determine what staffing levels they need to more effectively operate their SPS programs. VA expects VHA to complete all of these actions by July 2019 or earlier. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees and the Secretary of Veterans Affairs. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Sharon M. Silas at (202) 512-7114 or silass@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs can be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV.
Appendix I: Top VHA Reusable Medical Equipment Issues among Select Veterans Affairs Medical Centers, Fiscal Year 2017

Our review of the 27 fiscal year 2017 inspections of VAMCs conducted by Veterans Integrated Service Networks (VISN) for which VHA did not have inspection reports identified a number of common reusable medical equipment (RME) issues among the select VAMCs. The top 10 are listed in table 2 below.

Appendix II: Percentage of Issue Briefs Related to Reusable Medical Equipment by Category, Fiscal Years 2015-2017

Our review of the Veterans Health Administration (VHA) summary of issue briefs for fiscal years 2015 through 2017 identified three major categories of issues related to reusable medical equipment (RME). See table 3 below for the percentage of all issue briefs that fell into each of these three categories.

Appendix III: Comments from the Department of Veterans Affairs

Appendix IV: GAO Contact and Staff Acknowledgments

In addition to the contact named above, Karin Wallestad (Assistant Director), Teresa Tam (Analyst-in-Charge), Kenisha Cantrell, Michael Zose, and Krister Friday made major contributions to this report. Also contributing were Kaitlin Farquharson, Diona Martyn, and Muriel Brown.
Why GAO Did This Study

VHA operates one of the largest health care delivery systems in the nation, serving over 9 million enrolled veterans. In providing health care services to veterans, VAMCs use RME, which must be reprocessed—that is, cleaned, disinfected, or sterilized—between uses. Improper reprocessing of RME can negatively affect patient care. To help ensure the safety of veterans, VHA policy establishes requirements VAMCs must follow when reprocessing RME and requires a number of related oversight efforts. GAO was asked to review VHA's reprocessing of RME. This report examines (1) VHA's oversight of VAMCs' adherence to RME policies and (2) challenges VAMCs face in operating their Sterile Processing Services programs, and any efforts by VHA to address these challenges. GAO reviewed relevant VHA documents, including RME policies and VISN inspection results for fiscal year 2017. GAO interviewed officials from VHA, all 18 VISNs, and four VAMCs, selected based on geographic variation, VAMC complexity, and data on operating room delays. GAO examined VHA's oversight in the context of federal internal control standards on communication, monitoring, and information.

What GAO Found

GAO found that the Department of Veterans Affairs' (VA) Veterans Health Administration (VHA) does not have reasonable assurance that VA Medical Centers (VAMC) are following policies related to reprocessing reusable medical equipment (RME). Reprocessing involves cleaning, sterilizing, and storing surgical instruments and other RME, such as endoscopes. VHA has not ensured that all VAMCs' RME inspections have been conducted because it has incomplete information from the annual inspections by Veterans Integrated Service Networks (VISN), which oversee VAMCs. For fiscal year 2017, VHA did not have 39 of the 144 reports from the VISNs' inspections of their VAMCs' Sterile Processing Services departments. VISNs were able to provide GAO with evidence that they had conducted 27 of the 39 missing inspections; top areas of non-adherence in these inspections were related to quality and training, among other things. Although VHA has ultimate oversight responsibility, a VHA official told GAO that VHA had not been aware it lacked complete inspection results because it has largely relied on the VISNs to ensure complete inspection result reporting. Without analyzing and sharing complete information from inspections, VHA does not have assurance that its VAMCs are following RME policies designed to ensure that veterans receive safe care. GAO also found that VAMCs face challenges operating their Sterile Processing Services programs—notably, addressing workforce needs. Almost all of the officials from all 18 VISNs and selected VAMCs GAO interviewed reported Sterile Processing Services workforce challenges, such as lengthy hiring timeframes and limited pay and professional growth potential. According to officials, these challenges result in programs having difficulty maintaining sufficient staffing. VHA officials told GAO that the office is considering studying Sterile Processing Services staffing at VAMCs, although VHA does not have definitive plans to do so. VHA's Sterile Processing Services workforce challenges pose a potential risk to VAMCs' ability to ensure access to sterilized medical equipment, and VHA's failure to address this risk is inconsistent with standards for internal control in the federal government.
Until VHA examines these workforce needs, VHA will not know whether or to what extent the reported challenges adversely affect VAMCs' ability to effectively operate their Sterile Processing Services programs and ensure access to safe care for veterans.

What GAO Recommends

GAO is making three recommendations to VHA, including that it ensure all RME inspections are being conducted and complete results reported, and that it examine Sterile Processing Services workforce needs and make adjustments, as appropriate. VA concurred with these recommendations.
Background

The federal government obligates tens of billions of dollars annually on IT. Prior IT expenditures, however, have too often produced failed projects—that is, projects with multimillion-dollar cost overruns and schedule delays and with questionable mission-related achievements. In our 2017 high-risk series update, we reported that improving the management of IT acquisitions and operations remains a high-risk area because the federal government has spent billions of dollars on failed IT investments.

Awarding Contracts and Orders Noncompetitively

Agencies are generally required to use full and open competition—meaning all responsible sources are permitted to compete—when awarding contracts. However, the Competition in Contracting Act of 1984 recognizes that full and open competition is not feasible in all circumstances and authorizes contracting without full and open competition under certain conditions. In addition, there are competition-related requirements for other types of contract vehicles, including multiple-award indefinite-delivery/indefinite-quantity (IDIQ) contracts and the General Services Administration's (GSA) Federal Supply Schedule (FSS). The rules regarding exceptions to full and open competition and other competition-related requirements are outlined in various parts of the Federal Acquisition Regulation (FAR). For example:

Contracting officers may award a contract without providing for full and open competition if one of seven exceptions listed in FAR Subpart 6.3 applies. Examples of allowable exceptions include circumstances when products or services required by the agency are available from only one source, when disclosure of the agency's need would compromise national security, or when the need for products and services is of such an unusual and compelling urgency that the federal government faces the risk of serious financial or other injury. Generally, exceptions to full and open competition under FAR Subpart 6.3 must be supported by written justifications that contain sufficient facts and rationale to justify use of the specific exception. Depending on the proposed value of the contract, the justifications require review and approval at successively higher approval levels within the agency.

Contracting officers are also authorized to issue orders under multiple-award IDIQ contracts noncompetitively. Generally, contracting officers must provide each IDIQ contract holder with a fair opportunity to be considered for each order unless exceptions apply. Contracting officers who issue orders over certain thresholds under an exception to fair opportunity are required to provide written justification for doing so. In April 2017, we found that government-wide, more than 85 percent of all order obligations under multiple-award IDIQ contracts were competed from fiscal years 2011 through 2015.

Orders placed under GSA's FSS program are also exempt from FAR part 6 requirements. However, ordering procedures require certain FSS orders exceeding the simplified acquisition threshold to be placed on a "competitive basis," which includes requesting proposals from as many schedule contractors as practicable. If a contracting officer decides not to provide opportunity to all contract holders when placing an FSS order over the simplified acquisition threshold, that decision must be documented and approved.
The FAR allows for orders to be placed under these circumstances based on the following justifications: when an urgent and compelling need exists; when only one source is capable of providing the supplies or services because they are unique or highly specialized; when, in the interest of economy and efficiency, the new work is a logical follow-on to an original FSS order that was placed on a "competitive basis;" and when an item is "peculiar to one manufacturer." Agencies may also award contracts on a sole-source basis in coordination with the Small Business Administration (SBA) to eligible 8(a) program participants. While agencies are generally not required to justify these sole-source awards, contracts that exceed a total value of $22 million require a written justification in accordance with FAR Subpart 6.3.

Bridge Contracts

In certain situations, it may become evident that services could lapse before a subsequent contract can be awarded. In these cases, because of time constraints, contracting officers generally use one of two options: (1) extend the existing contract or (2) award a short-term stand-alone contract to the incumbent contractor on a sole-source basis to avoid a lapse in services. While no government-wide definition of bridge contracts exists, we developed the following definitions related to bridge contracts that we used for our October 2015 report:

Bridge contract. An extension to an existing contract beyond the period of performance (including base and option years), or a new, short-term contract awarded on a sole-source basis to an incumbent contractor to avoid a lapse in service caused by a delay in awarding a follow-on contract.

Predecessor contract. The contract in place prior to the award of a bridge contract.

Follow-on contract. A longer-term contract that follows a bridge contract for the same or similar services. This contract can be competitively awarded or awarded on a sole-source basis.

Contracts, orders, and extensions (both competitive and noncompetitive) are included in our definition of a "bridge contract" because the focus of the definition is on the intent of the contract, order, or extension. DOD and some of its components, including the Navy, the Defense Logistics Agency (DLA), and the Defense Information Systems Agency (DISA), have established their own bridge contract definitions and policies. Congress enacted legislation in 2017 that established a definition of "bridge contracts" for DOD and its components. For the purposes of this report, we use the same definition of bridge contracts as in our October 2015 report, unless otherwise specified. We acknowledge that in the absence of a government-wide definition, agencies may have differing views of what constitutes a bridge contract. We discuss these views further in the body of this report. In our October 2015 report on bridge contracts, we found that the agencies included in our review—DOD, HHS, and the Department of Justice—had limited or no insight into their use of bridge contracts. In addition, we found that while bridge contracts are typically envisioned as short term, some bridge contracts included in our review involved one or more bridges that spanned multiple years—potentially undetected by approving officials.
The fact that the full length of a bridge contract, or multiple bridge contracts for the same requirement, is not readily apparent from documents that may require review and approval, such as an individual J&A, presents a challenge for those agency officials responsible for approving the use of bridge contracts. Approving officials signing off on individual J&As may not have insight into the total number of bridge contracts that may be put in place by looking at individual J&As alone. In October 2015, we recommended that the Administrator of the Office of Federal Procurement Policy (OFPP) take the following two actions: (1) take appropriate steps to develop a standard definition for bridge contracts and incorporate it as appropriate into relevant FAR sections; and (2) as an interim measure until the FAR is amended, provide guidance to agencies on a definition of bridge contracts, with consideration of contract extensions as well as stand-alone bridge contracts, and on suggestions for agencies to track and manage their use of these contracts, such as identifying a contract as a bridge in a J&A when it meets the definition, and listing the history of previous extensions and stand-alone bridge contracts. OFPP concurred with our recommendation to provide guidance to agencies on bridge contracts and stated its intention to work with members of the FAR Council to explore the value of incorporating a definition of bridge contracts in the FAR. As of November 2018, OFPP had not yet implemented our recommendations but has taken steps to develop guidance on bridge contracts. Specifically, OFPP staff told us they have drafted management guidance, which includes a definition of bridge contracts, and provided it to agencies' Chief Acquisition Officers and Senior Procurement Executives for review. OFPP staff told us they received many comments on the draft guidance and were in the process of addressing those comments.

Agencies Obligated More than $10 Billion Annually for Information Technology on Noncompetitively Awarded Contracts and Orders, but Unreliable Data Obscures Full Picture

Federal agencies reported annually obligating from $53 billion in fiscal year 2013 to $59 billion in fiscal year 2017 on IT-related products and services. Of that amount, agencies reported that more than $15 billion each year—or about 30 percent of all obligations for IT products and services—was awarded noncompetitively. However, in a generalizable sample of contracts and orders, we found significant errors in certain types of orders, which call into question the reliability of competition data associated with roughly $3 billion per year in obligations. As a result, the actual amount agencies obligated on noncompetitive contract awards for IT products and services is unknown.

IT Contract Obligations Totaled More than $50 Billion Annually

From fiscal years 2013 through 2017, we found that total IT obligations reported by federal agencies ranged from nearly $53 billion in fiscal year 2013 to $59 billion in fiscal year 2017. The amount obligated on IT products and services generally accounted for about one-tenth of total federal contract spending (see figure 1). For fiscal years 2013 through 2017, the three agencies we reviewed in more depth—DOD, DHS, and HHS—collectively accounted for about two-thirds of federal IT spending (see figure 2).
Agencies Reported Obligating More than $15 Billion on Noncompetitive Contracts for IT Annually, but Full Extent of Noncompetitive Dollars Is Not Known Due to Unreliable Data

From fiscal years 2013 through 2017, agencies reported in FPDS-NG obligating more than $15 billion—about 30 percent of all annual IT obligations—each year on noncompetitively awarded contracts and orders. We determined, however, that the agencies' reporting of certain competition data was unreliable (see figure 3). Specifically, we found that contracting officers miscoded 22 out of 41 orders in our sample, of which 21 cited "follow-on action following competitive initial action" or "other statutory authority" as the legal authority for using an exception to fair opportunity. DOD contracting officers had miscoded 11 of the 21 orders, while DHS and HHS contracting officers had miscoded 4 and 6 orders, respectively. This miscoding occurred at such a high rate that it called into question the reliability of the competition data on orders totaling roughly $3 billion in annual obligations. In each of these cases, contracting officers identified the orders as being noncompetitively awarded when they were, in fact, competitively awarded. Because assessing whether contracts and orders identified as competitively awarded were properly coded was outside the scope of our review, we are not in a position to assess the overall reliability of competition data for IT-related contracts.

For these 21 orders, we found that DHS was aware of issues surrounding most of its miscodings and had taken actions to fix the problems, while DOD and HHS generally had limited insight as to why these errors occurred.

DHS miscoded 4 orders, 3 of which were awarded under single award contracts. DHS officials told us that orders issued from single award contracts should inherit the competition characteristics of the parent contract. However, as FPDS-NG currently operates, contracting officers have the ability to input a different competition code for these orders. In this case, each of the single award contracts was competitively awarded, and therefore all the subsequent orders issued from these contracts should be considered competitively awarded, as there are no additional opportunities for competition. DHS has taken actions to address this issue. DHS officials stated that, in conjunction with DOD, they have asked GSA, which manages the FPDS-NG data system, to modify FPDS-NG to automatically prefill competition codes for orders awarded under single award contracts. DHS officials noted that GSA expects to correct the issue in the first quarter of fiscal year 2019, which should mitigate the risk of agencies miscoding orders issued under single award contracts in the future. DHS officials have also provided training to their contracting personnel that single award orders must inherit the characteristics of the parent contract.

DOD and HHS officials, on the other hand, had limited insight as to why their orders were miscoded. For example, DOD miscoded a total of 11 orders (5 orders awarded under single award contracts and 6 awarded under multiple award contracts). For 8 of these orders, contracting officers did not provide reasons why these errors occurred.
For the remaining 3 orders—each of which was issued under a single award contract—contracting officials told us that they had used the "follow-on action following competitive initial action" code because the underlying contract had been competed. Similarly, at HHS, which miscoded a total of 6 orders (4 awarded under single award contracts and 2 awarded under multiple award contracts), component officials told us that these errors were accidental and could not provide any additional insight as to why they were made.

While GSA's changes to the FPDS-NG system, when implemented, may help address the issue of miscoding competition data on orders issued from single award contracts, they will not address coding errors for multiple award orders that cited exceptions to competition even when they were competed. The FAR notes that FPDS-NG data are used in a variety of ways, including assessing the effects of policies and management initiatives, yet we have previously reported on the shortcomings of the FPDS-NG system, including issues with the accuracy of the data. Miscoding of competition requirements may hinder the accomplishment of certain statutory, policy, and regulatory requirements. For example:

The FAR requires agency competition advocates, among other duties and responsibilities, to prepare and submit an annual report to their agencies' senior procurement executive and chief acquisition officer on actions taken to achieve full and open competition in the agency and to recommend goals and plans for increasing competition.

OMB required agencies to reduce their reliance on noncompetitive contracts, which it categorized as high-risk, because, absent competition, agencies must negotiate contracts without a direct market mechanism to help determine price.

Federal internal control standards state that management should use quality information to achieve an entity's objectives.

Without identifying the reasons why contracting officers are miscoding these orders in FPDS-NG, DOD and HHS are unable to take action to ensure that competition data are accurately recorded and are at risk of using inaccurate information to assess whether they are achieving their competition objectives.

After excluding the $3 billion in annual obligations we determined was not sufficiently reliable, we found that from fiscal years 2013 through 2017 about 90 percent of noncompetitive IT obligations reported in FPDS-NG were used to buy services, hardware, and software (see figure 4). Services include the maintenance and repair of IT equipment as well as professional technology support. Hardware includes products such as fiber optic cables and computers, and software includes items such as information technology software and maintenance service plans.

Agencies Cited That Only One Contractor Could Meet the Need or Small Business Requirements as Most Common Reasons for Awarding Noncompetitive Contracts

The documentation for the contracts and orders at the three agencies we reviewed generally cited one of two reasons for noncompetitively awarding IT contracts or orders: only one source could meet the agency's needs, or the contract was being awarded sole-source to an 8(a) small business participant.
Specifically, based on our generalizable sample, we estimate that nearly 60 percent of fiscal year 2016 noncompetitive contracts and orders at DOD, DHS, and HHS were awarded noncompetitively because agencies cited that only one contractor could meet the need, and approximately 26 percent of contracts and orders were awarded sole-source to an 8(a) small business participant. We estimate that agencies cited a variety of other reasons for not competing approximately 16 percent of noncompetitive contracts and orders, such as unusual and compelling urgency, international agreement, and national security. Within our sample of 142 contracts and orders, we analyzed J&As or similar documents to obtain additional detail as to why the contracts and orders were awarded noncompetitively. See table 2 for a breakdown of the overall reasons cited for awarding contracts noncompetitively within our sample.

For 79 of the 142 contracts and orders we reviewed, agencies cited that only one source could meet the need. We found that this exception was the most commonly cited reason for a sole-source IT contract or order at DOD and DHS, but not at HHS. At HHS, the most common reason was that the contract or order was awarded on a sole-source basis to an 8(a), which we discuss in more detail later. Agencies justified use of the "only one source" exception on the basis that the contractor owned the proprietary technical or data rights, that the contractor had unique qualifications or experience, that compatibility issues existed, or that a brand-name product was needed (see figure 5). The following examples illustrate the reasons cited by the agencies as to why only one contractor could meet their needs:

Proprietary data rights issues and compatibility issues. The Navy issued a 9-month, approximately $350,000 order under an IDIQ contract for two data terminal sets. The terminal sets, according to Navy officials, have been used by the Navy since the 1990s to exchange radar tracking and other information among airborne, land-based, and ship-board tactical data systems and with certain allies. The Navy's J&A document noted that the contractor owned the proprietary data rights to the transmitting equipment and software, and the Navy required the equipment to be compatible and interchangeable with systems currently fielded throughout the Navy. Furthermore, the document noted that seeking competition through the development of a new source would result in additional costs that would far exceed any possible cost savings that another source could provide and would cause unacceptable schedule delays. This example illustrates that the decisions program officials make during the acquisition process about whether to acquire certain rights to technical data can have far-reaching implications for DOD's ability to sustain and competitively procure parts and services for those systems, as we have previously reported. In our May 2014 report on competition in defense contracting, we found that 7 of 14 justifications we reviewed explained that the awards could not be competed due to a lack of technical data. All 7 of these justifications or supporting documents described situations, ranging from 3 to 30 years in duration, where DOD was unable to conduct a competition because data rights were not purchased with the initial award. We recommended in May 2014 that DOD ensure that existing acquisition planning guidance promotes early vendor engagement and allows both the government and vendors adequate time to prepare for competition.
DOD concurred with our recommendation. In April 2015, DOD updated its acquisition guidance to incorporate new guidelines for creating and maintaining a competitive environment. These guidelines emphasize acquisition planning steps including involvement with industry in obtaining feedback on draft solicitations, market research, and requirements development.

Unique qualifications and experience. DHS placed four separate orders under an IDIQ contract for data center support totaling approximately $7 million. The requirement was to maintain mission critical services during a data center support pilot, prototype, and transition period starting in fiscal year 2015. Among other things, DHS's J&A noted that no other contractors had sufficient experience with DHS's infrastructure and requirements necessary to maintain services at the required level during the transition period. HHS awarded an approximately $4 million contract to buy support services for an IT center for a 12-month ordering period, including options. HHS's J&A noted that only the incumbent contractor had the requisite knowledge and experience to operate and maintain the mission and business systems in the IT center during the transition of operations from one location to another. The justification further stated that HHS had no efforts underway to increase competition in the future, as this requirement is not anticipated to be a recurring requirement. Program officials stated that they are migrating from legacy IT systems to a new commercial off-the-shelf system.

Brand-name products. DOD awarded a 5-month, approximately $500,000 contract for brand name equipment and installation that supported various video-teleconference systems. The J&A stated that this particular brand name product was the only product that would be compatible with current configurations installed in one of its complexes. To increase competition in the future, the J&A stated that technical personnel will continue to evaluate the marketplace for commercially available supplies and installation that can meet DOD's requirements.

For 42 of the 142 contracts and orders we reviewed, we found that agencies awarded a sole-source contract or order to 8(a) small business participants. HHS awarded 13 of the 23 sole-source contracts and orders we reviewed to 8(a) small business participants, DOD awarded 25 of 95, and DHS 4 of 24. We found that all contracts and orders in our review that were awarded on a sole-source basis to 8(a) small business participants were below the applicable competitive thresholds or otherwise below the FAR thresholds that require a written justification. As previously discussed, agencies may award contracts on a sole-source basis to eligible 8(a) participants, either in coordination with SBA or when they are below the competitive threshold. While agencies are generally not required to justify these smaller dollar value sole-source 8(a) awards, contracts that exceed a total value of $22 million require a written justification. Since none of the 8(a) sole-source contracts and orders in our review required written justifications, the contract files generally did not provide the rationale behind the sole-source award. Policy and contracting officials from all three agencies we reviewed stated they made sole-source awards to 8(a) small business participants to help meet the agency's small business contracting goals and save time.
HHS officials further stated that they consider their awards to 8(a) small business participants a success because they are supporting small businesses. Officials stated that once a requirement is awarded through the 8(a) program, the FAR requires that the requirement be set aside for an 8(a) contractor unless the requirement has changed or an 8(a) contractor is not capable or available to complete the work.

For 23 of the 142 contracts and orders we reviewed, we found that agencies cited other reasons for awarding contracts and orders noncompetitively. For example:

Urgent and compelling need. DHS's Coast Guard awarded an approximately 10-month, $6.5 million order (encompassing all options) for critical payroll services in its human resources management system under a GSA federal supply schedule contract. The Coast Guard justified the award based on an urgent and compelling need. A Coast Guard official explained that the effort to competitively award a follow-on contract had been delayed because the Coast Guard had not developed a defined statement of work in a timely manner and the agency had received a larger number of proposals than initially anticipated. Therefore, the evaluation process took longer than expected. In addition, the Coast Guard's competitive follow-on contract, which was awarded in June 2018, was protested. In October 2018, GAO denied the protest, and the Coast Guard is currently planning to transition to the newly awarded contract.

International agreement. The Army placed an approximately 8-month, $1 million order under an IDIQ contract for radio systems and cited international agreement as the reason for a noncompetitive award. This order was part of a foreign military sales contract with the Government of Denmark.

Authorized or required by statute. The Defense Logistics Agency (DLA) cited "authorized or required by statute" when it placed an approximately $1.5 million, 12-month order under an IDIQ contract for sustainment support services for an application that is used for planning and initiating contracting requirements in contingency environments. DLA noted that this model was contracted under the Small Business Innovation Research Program, which supports scientific and technological innovation through the investment of federal research funds into various research projects.

National security. The U.S. Special Operations Command (SOCOM) placed an approximately 8-month, $1 million order for radio spare parts and cited national security as the reason for a noncompetitive award.

An Estimated Eight Percent of Fiscal Year 2016 IT Noncompetitive Contracts and Orders Were Bridges, and Agencies Have Difficulty Managing Them

An Estimated Eight Percent of IT Noncompetitive Contracts and Orders in Fiscal Year 2016 Were Bridge Contracts

We estimate that about 8 percent of contracts and orders above $150,000 in fiscal year 2016 at DOD, DHS, and HHS were bridge contracts. Consistent with our October 2015 findings, the agencies we reviewed face continued challenges with oversight of bridge contracts, based on 15 contracts and orders we reviewed in depth. For example, we found that in 9 of the 15 cases, bridge contracts were associated with additional bridges that were not apparent in the documentation related to the contract or order we reviewed, such as a J&A, and that corresponded with longer periods of performance and higher contract values than initially apparent.
Agency officials cited a variety of reasons for needing bridge contracts, including acquisition planning challenges, source selection challenges, and bid protests. Based on our generalizable sample, we estimate that about 8 percent of contracts and orders above $150,000 in fiscal year 2016 at DOD, DHS, and HHS were bridge contracts. We verified, using our definition of bridge contracts as criteria, that 13 of 142 contracts and orders in our generalizable sample were bridge contracts based on reviews of J&As, limited source justifications, or exceptions to fair opportunity, among other documents. In addition, we found two additional bridge contracts related to our generalizable sample while conducting our in-depth review, bringing the total number of bridge contracts we identified during this review to 15.

Agencies Face Continued Challenges with Oversight of Bridge Contracts

We found that the bridge contracts we reviewed were often longer than initially apparent from our review of related documentation, such as a J&A, and sometimes spanned multiple years. Bridge contracts can be a useful tool in certain circumstances to avoid a gap in providing products and services, but they are typically envisioned to be used for short periods of time. When we conducted an in-depth review of the bridge contracts, such as by reviewing the contract files for the predecessor, bridge, and follow-on contracts, we found that in most cases these involved one or more bridges that spanned longer periods and corresponded with higher contract values than initially apparent. Specifically, we found that 9 of the 15 bridge contracts had additional bridges related to the same requirement that were not initially apparent from documents requiring varying levels of approval by agency officials, such as the J&As. Collectively, agencies awarded bridge contracts associated with these 15 contracts and orders with estimated contract values of about $84 million (see table 3). The following examples illustrate contracts we reviewed in which the periods of performance were longer than initially apparent:

HHS's Indian Health Service (IHS) awarded a 4-month, approximately $1.6 million bridge order for project management and support services for IHS's resource and patient management system. We found, however, that the predecessor contract had already been extended by 6 months before the award of the bridge order due to acquisition planning challenges associated with delays in developing the acquisition package for the follow-on contract. Subsequently, the 4-month bridge order was extended for an additional 6 months, in part because the follow-on award—which had been awarded to a new contractor—was protested by the incumbent contractor due to concerns over proposal evaluation criteria. Ultimately, the protest was dismissed. Following the resolution of the bid protest, officials awarded an additional 2-month bridge order for transition activities. In total, the bridge orders and extensions spanned 18 months and had an estimated value of about $4.7 million. Figure 6 depicts the bridge orders and extensions and indicates the 4-month bridge and 6-month extension we had initially identified.

The Air Force awarded a 3-month, approximately $630,000 bridge contract to support a logistics system used to monitor weapon system availability and readiness.
We found, however, that the Air Force had previously awarded a 3-month bridge contract due to delays resulting from a recent reorganization, which, according to Air Force officials, made it unclear which contracting office would assume responsibility for the requirement. The Air Force subsequently awarded an additional 3-month bridge contract due to acquisition planning challenges, such as planning for the award of the follow-on sole-source contract. The total period of performance for the bridges was 9 months, with an estimated value of about $1.9 million (see figure 7).

As of August 2018, 13 of the 15 bridge contracts had follow-ons in place—5 were awarded competitively and 8 were awarded noncompetitively. Two bridge contracts did not yet have follow-on contracts in place for various reasons. For example, in one instance, the Coast Guard's requirement for human resources and payroll support services has continued to operate under a bridge contract because the Coast Guard's planned follow-on contract—a strategic sourcing IDIQ—was awarded in June 2018 and subsequently protested, among other delays.

Officials Frequently Cited Acquisition Planning Challenges as Necessitating the Use of a Bridge Contract

Based on our reviews of contract documentation and information provided by agency officials, we found that acquisition planning challenges were the principal reason for needing to use a bridge contract across the 15 bridge contracts we reviewed. In particular, acquisition packages prepared by program offices to begin developing a solicitation were often not prepared in a timely fashion. Acquisition packages include statements of work and independent government cost estimates, among other documents, and are generally prepared by the program office with the assistance of the contracting office. In addition to acquisition planning challenges, officials cited delays in source selection and bid protests, among other reasons, as justifying the need to use a bridge contract (see figure 8). The following examples illustrate reasons officials cited for needing a bridge contract:

DOD's DISA awarded a bridge contract for IT support services due to acquisition planning challenges, specifically the late submission of acquisition packages. According to contracting officials, the bridge contract was originally intended to consolidate 3 of the previous contracts associated with this requirement, but a fourth was added much later in the process. DISA contracting officials said that the program office did not submit acquisition package documentation in a timely manner and, once submitted, the documentation required numerous revisions. These officials added that they had to award an additional bridge contract to avoid a lapse in service once they received a completed package from the program office because there was not enough time to conduct a competitive source selection and analysis.

DOD's SOCOM extended an IDIQ contract for radio supplies and services due to source selection delays and acquisition workforce challenges. For example, contracting officials said they extended the IDIQ for 12 months because the contracting office was working on a source selection for the follow-on contract for modernized radios and simply did not have the manpower to award a new sustainment contract for the existing radios at the same time.
DHS’s Customs and Border Protection (CBP) awarded an approximately 16-month bridge contract in June 2016 for engineering and operations support of CBP’s Oracle products and services due to bid protests associated with March 2016 orders for this requirement. We found the protests were filed on the basis that CBP had issued the task order on a sole-source basis, which precluded other contractors from competing for the award. GAO dismissed the protest in May 2016 as a result of CBP’s stated intent to terminate the task order and compete the requirement as part of its corrective action plan. According to CBP contracting officials, they awarded the approximately16-month bridge contract to the incumbent contractor to continue services until GAO issued a decision and the services could be transitioned to the awardee. In September 2017, CBP officials awarded the competitive follow-on contract to a new vendor, but this award was also protested due to alleged organizational conflicts of interest, improperly evaluated technical proposals, and an unreasonable best-value tradeoff determination. As a result, CBP officials issued a stop-work order effective October 2017. To continue services during the protest, CBP officials extended the existing bridge contract by 3 months and then again by another 6 months. In January 2018, GAO dismissed the protest in its entirety and the stop-work order was lifted. According to a CBP contracting official, CBP did not exercise the final 3 months of options of the 6-month extension. In 2015, we found that the full length of a bridge contract, or multiple bridge contracts, is not always readily apparent from review of an individual J&A, which presents challenges for approving officials, as they may not have insight into the total number of bridges put into place by looking at individual J&As alone. We found a similar situation in our current review. For example, the J&As for the 8 bridge contracts with J&As did not include complete information on the periods of performance or estimated values of all related bridge contracts. In the Absence of Government-wide Guidance, Others Have Taken Steps to Define Bridge Contracts OFPP has not yet taken action to address the challenges related to the use of bridge contracts that we found in October 2015. At that time, we recommended that OFPP take appropriate steps to develop a standard definition of bridge contracts and incorporate it as appropriate into relevant FAR sections, and to provide guidance to federal agencies in the interim. We further recommended that the guidance include (1) a definition of bridge contracts, with consideration of contract extensions as well as stand-alone bridge contracts, and (2) suggestions for agencies to track and manage their use of these contracts, such as identifying a contract as a bridge in a J&A when it meets the definition, and listing the history of previous extensions and stand-alone bridge contracts back to the predecessor contract in the J&A. However, as of November 2018, OFPP had not yet done so. As a result, agencies continue to face similar challenges with regard to the use of bridge contracts that we identified in 2015 and there is a lack of government-wide guidance that could help to address them. In the absence of a federal government-wide definition, others have taken steps to establish a bridge contracts definition. For example, Congress has established a statutory definition of bridge contracts that is applicable to DOD and its components. 
Specifically, Section 851 of the National Defense Authorization Act for Fiscal Year 2018 defined a bridge contract as (1) an extension to an existing contract beyond the period of performance to avoid a lapse in service caused by a delay in awarding a subsequent contract, or (2) a new short-term contract awarded on a sole-source basis to avoid a lapse in service caused by a delay in awarding a subsequent contract. Section 851 requires that, by October 1, 2018, the Secretary of Defense ensure that DOD program officials plan appropriately to avoid the use of a bridge contract for services. In instances where bridge contracts were awarded due to poor acquisition planning, the legislation outlines notification requirements with associated monetary thresholds for bridge contracts.

Acting on this requirement and in response to our prior bridge contracts report, DOD established a bridge contracts policy memorandum in January 2018. The policy defines bridge contracts as modifications to existing contracts to extend the period of performance, increase the contract ceiling or value or both, or a new, interim sole-source contract awarded to the same or a new contractor to cover the timeframe between the end of the existing contract and the award of a follow-on contract. The DOD policy excludes extensions awarded using the option to extend services clause from its definition of bridge contracts unless the extension exceeds 6 months. In addition, DOD's bridge contract policy directs the military departments and DOD components to develop a plan to reduce bridge contracts and to report their results annually to the Office of the Under Secretary of Defense for Acquisition and Sustainment. As of August 2018, DHS and HHS did not have component- or department-level policies that define or provide guidance on the use of bridge contracts.

Differing definitions of bridge contracts can lead to varying perspectives as to what constitutes a bridge contract. For example:

Differing views on whether a contract within the 8(a) program can be a bridge. In one instance, we reviewed a 3-month, approximately $1.9 million bridge contract that DLA awarded to the incumbent contractor for a variety of IT contractor support services for DLA's Information Operations (J6). This bridge contract was awarded to continue services until DLA could award a 12-month, roughly $2.9 million sole-source contract (including all options) to an 8(a) small business participant to consolidate tasks from 20 contracts as part of a reorganization effort within J6. After that contract expired, DLA awarded a second 12-month, about $3 million contract (including all options) to the same 8(a) small business participant to continue these task consolidation efforts. DLA subsequently awarded a 2-month, $122,000 contract extension to continue services until it could award a follow-on order under DLA's J6 Enterprise Technology Services (JETS) multiple award IDIQ contract, the award of which had also been delayed. Although the 8(a) contracts were not awarded to the incumbent of the initial 3-month bridge, we believe that these contracts could be considered bridge contracts, as they were meant to bridge a gap in services until the reorganization efforts were complete and the JETS contract was awarded. DLA contracting officials, however, told us they do not consider the 8(a) contracts to be bridge contracts, as these two contracts and the follow-on task order under JETS were awarded sole-source to 8(a) small business participants.
DLA officials added that they plan to keep the requirement in the 8(a) program.

Differing views as to whether contract extensions are bridges. DOD's policy generally does not count contract extensions using the "option to extend services" clause as bridges unless the option is extended beyond the 6 months allowed by the clause. Navy policy, however, states that using the option to extend services clause is considered a bridge if the option was not priced at contract award. Similarly, HHS officials stated that the department does not consider contract extensions using the "option to extend services" clause to be bridge contract actions if the total amount of the services covered was evaluated in the initial award and if the length does not extend beyond the allowable 6 months.

The differences among agencies' views and policies may be due to the extent to which the extensions are considered "competitive." For the purposes of our definition, if the extension—whether it was competed or not—was used to bridge a gap in service until a follow-on contract could be awarded, then it would be considered a bridge. Without agreement as to what constitutes a bridge contract, agencies' efforts to improve oversight of bridge contracts and to identify challenges associated with their use will be hindered. While we are not making any new recommendations in this area, we continue to believe that our October 2015 recommendation that OFPP establish a government-wide definition and provide guidance to agencies on the use of bridge contracts remains valid.

New Definition Narrows Scope of Legacy IT Noncompetitive Contracts and Orders to About Seven Percent

An estimated 7 percent of IT noncompetitive contracts and orders at selected agencies in fiscal year 2016 were in support of legacy IT systems as newly defined in statute, which is considerably fewer than we found when using the previous definition of legacy IT. At the time our review began, OMB's draft definition for legacy IT systems stated that legacy IT spending was spending dedicated to maintaining the existing IT portfolio, excluding provisioned services such as cloud. Using this definition, and based on our generalizable sample, we estimated that about 80 percent of IT noncompetitive contracts and orders over $150,000 in fiscal year 2016 at DOD, DHS, and HHS were awarded in support of legacy IT systems. In December 2017, however, Congress enacted the Modernizing Government Technology Act (MGT) as part of the National Defense Authorization Act for Fiscal Year 2018. This act defined a legacy IT system as an "outdated or obsolete system of information technology." Given this new statutory definition, we requested that each agency reassess how it would characterize the nature of its IT systems. For the 142 contracts and orders we reviewed, we found that agencies identified significantly fewer contracts and orders as supporting legacy IT systems when using the new definition. For example, using the OMB draft definition, agencies identified 118 out of 142 contracts and orders as supporting legacy IT systems. However, when using the more recent MGT Act definition, agencies identified only 10 out of 137 contracts and orders as supporting legacy IT systems (see figure 9).
Consequently, using the definition provided under the MGT Act, we estimate that about 7 percent of IT noncompetitive contracts and orders over $150,000 in fiscal year 2016 at DOD, DHS, and HHS were awarded in support of outdated or obsolete legacy IT systems. Agencies' program officials said that they are still supporting outdated or obsolete legacy IT systems (as defined by the MGT Act) because the systems are needed for the mission or because the agencies are in the process of buying new updated systems or modernizing current ones. For example:

Army officials awarded a 5-year, roughly $1.2 million contract to install, configure, troubleshoot, and replace Land Mobile Radio equipment at Ft. Sill, Oklahoma. An Army official noted that all of the equipment is older than 12 years and is nearing its end of life. The radio equipment, however, is required to support critical communications for first responder and emergency service personnel. The Army official did not indicate any plans to modernize, but noted that a loss of support for this system would significantly affect all of Fort Sill's land mobile radio communications.

The Air Force awarded a $218,000 order to buy repair services for the C-130H aircraft's radar display unit and electronic flight instrument. An Air Force official noted that the legacy hardware bought through the order is part of critical systems that are required to safely fly the aircraft. The system, however, is obsolete, and the associated hardware is no longer supported by the vendor. The official told us that there is currently a re-engineering effort to modernize the systems that use this hardware.

HHS issued a 12-month, nearly $2.5 million order to buy operations and maintenance support for a Food and Drug Administration (FDA) system used to review and approve prescription drug applications. According to an FDA program official, efforts are underway to retire the system by gradually transferring current business processes to a commercial off-the-shelf solution that can better meet government needs. This official, however, told us that the system currently remains in use because FDA's Office of New Drugs is still heavily reliant on it.

Conclusions

Competition is a cornerstone of the federal acquisition system and a critical tool for achieving the best possible return on investment for taxpayers. In the case of information technology, federal agencies awarded slightly under a third of their contract dollars under some form of noncompetitive contract. Further, our current work was able to quantify that an estimated 8 percent of noncompetitive information technology contracts and orders were made under some form of bridge contract, which provides new context for the issues associated with their use. The challenges themselves, however, remain much the same since we first reported on the issue in 2015. OFPP has yet to issue guidance or promulgate revised regulations to help agencies identify and manage their use of bridge contracts, and our current work finds that the full scope of bridge contracts, or the underlying acquisition issues that necessitated their use in the first place, may not be readily apparent to agency officials who are approving their use. We continue to believe that our 2015 recommendation would improve the use of bridge contracts, and we encourage OFPP to complete its ongoing efforts in a timely fashion.
The frequency of the errors in reporting and their concentration within a specific type of contract action signal the need for more management attention and corrective action. These errors resulted in the potential misreporting of billions of dollars awarded under orders as being noncompetitively awarded when, in fact, they were competed. One agency included in our review—DHS—has taken steps to address the problems that underlie the errors in coding and provided additional training to its staff. DOD and HHS could benefit from additional insight into the reasons behind the high rates of miscoding to improve the accuracy of this information.

Recommendations for Executive Action

We are making a total of two recommendations, one to DOD and one to HHS.

The Secretary of Defense should direct the Under Secretary of Defense for Acquisition and Sustainment to identify the reasons behind the high rate of miscoding for orders awarded under multiple award contracts and use this information to identify and take action to improve the reliability of the competition data entered into FPDS-NG. (Recommendation 1)

The Secretary of Health and Human Services should direct the Associate Deputy Assistant Secretary for Acquisition to identify the reasons behind the high rate of miscoding for orders awarded under multiple award contracts and use this information to identify and take action to improve the reliability of the competition data entered into FPDS-NG. (Recommendation 2)

Agency Comments and Our Evaluation

We provided a draft of this report to DOD, DHS, HHS, and OMB for review and comment. DOD and HHS provided written comments and concurred with the recommendation we made to each department. In its written response, reproduced in appendix II, DOD stated it will analyze FPDS-NG data in an effort to identify why the miscoding of orders on multiple award contracts occurs and will use the information to advise the contracting community of actions to improve the reliability of competition data. In its written response, reproduced in appendix III, HHS stated that the Division of Acquisition within HHS's Office of Grants and Acquisition Policy and Accountability uses a data quality management platform to ensure data accuracy. HHS is currently performing the annual data validation and verification of the acquisition community's contract data for fiscal year 2018. Once this process is complete, the Division of Acquisition will contact contracting offices that produced records flagged as containing errors and provide recommendations that should help improve the fiscal year 2019 accuracy rating. HHS added that it will closely monitor those checks and all others to ensure contract data are accurate. However, HHS's letter did not specify how its annual data validation and verification process would address the high rate of miscoding of competition data for certain orders that we found. OMB staff informed us that they had no comments on this report. DHS, HHS, and the Air Force provided technical comments, which we incorporated as appropriate.

We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the Secretary of Homeland Security, the Secretary of Health and Human Services, and the Director of the Office of Management and Budget. In addition, the report is available at no charge on the GAO website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (202) 512-4841 or dinapolit@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.

Appendix I: Objectives, Scope, and Methodology

Our report examines (1) the extent to which agencies used noncompetitive contracts to procure information technology (IT) products and services for fiscal years 2013 through 2017; (2) the reasons for using noncompetitive contracts for selected IT procurements; (3) the extent to which IT procurements at selected agencies were bridge contracts; and (4) the extent to which noncompetitive IT procurements at selected agencies were in support of legacy systems.

To examine the extent to which agencies used noncompetitive contracts and orders to procure IT products and services, we analyzed government-wide Federal Procurement Data System-Next Generation (FPDS-NG) data on IT obligations from fiscal years 2013 through 2017. To define IT, we used the Office of Management and Budget's (OMB) Category Management Leadership Council list of IT products and service codes, which identified a total of 79 IT-related codes for IT services and products. Data were adjusted for inflation to fiscal year 2017 dollars using the Fiscal Year Gross Domestic Product Price Index. To assess the reliability of the FPDS-NG data, we electronically tested for missing data, outliers, and inconsistent coding. Based on these steps, we determined that FPDS-NG data were sufficiently reliable for describing general trends in government-wide and IT contract obligations data for fiscal years 2013 through 2017. In addition, as we later describe, we compared data for a generalizable sample of 171 noncompetitive contracts and orders to contract documentation, and we determined that 29 of these had been inaccurately coded in FPDS-NG as noncompetitive. As such, we determined that the data were not reliable for the purposes of reporting the actual amount agencies obligated on noncompetitive contracts and orders for IT products and services. Specifically, we determined that data for IT noncompetitive obligations awarded under multiple award contracts that cited "follow-on action following competitive initial action" or "other statutory authority" as the legal authority for using an exception to fair opportunity for the Departments of Defense (DOD), Homeland Security (DHS), and Health and Human Services (HHS) in fiscal year 2016 were not reliable. Evidence from our review of this sample suggests there was a high rate of miscoding for these orders; we applied these findings to the remaining agencies and fiscal years because we did not have confidence that those data were more reliable than what we had found.

To determine the reasons for using noncompetitive contracts for selected IT procurements, we selected the three agencies with the highest reported obligations on IT noncompetitive contracts for fiscal years 2012 through 2016 (the most recent year of data available at the time we began our review)—DOD, DHS, and HHS. These three agencies collectively accounted for about 70 percent of all noncompetitively awarded contracts for IT during this period. From these agencies, we selected a generalizable stratified random sample of 171 fiscal year 2016 noncompetitive contracts and orders for IT above the simplified acquisition threshold of $150,000.
The sample was proportionate to the number of noncompetitive contracts and orders for IT at each agency. Based on our review of documentation collected for the generalizable sample, we excluded 29 contracts and orders because they were awarded competitively but had been miscoded as noncompetitive or as having an exception to fair opportunity. As a result, our sample consisted of 142 contracts and orders. See table 4 for a breakdown by agency.

To determine the extent to which IT procurements at selected agencies were bridge contracts or in support of legacy systems, agencies provided information as to whether the contracts and orders met GAO's definition of a bridge contract—which we defined as an extension to an existing contract beyond the period of performance (including base and option years) or a new, short-term contract awarded on a sole-source basis to an incumbent contractor to avoid a lapse in service caused by a delay in awarding a follow-on contract—and whether they met the definitions of legacy IT systems in OMB's draft IT Modernization Initiative and the Modernizing Government Technology Act (MGT). OMB's draft IT Modernization Initiative defined legacy systems as spending dedicated to maintaining the existing IT portfolio but excluding provisioned services, such as cloud, while the MGT Act defines them as outdated or obsolete. We verified the agencies' determinations of whether a contract or order was a bridge by reviewing documentation, such as justification and approval and exception to fair opportunity documents, for the contracts and orders in our generalizable sample, and by conducting follow-up with agency officials as needed. We verified agencies' determinations of whether a contract or order was in support of a legacy system, as defined in OMB's draft IT Modernization Initiative, by reviewing the agencies' determinations, comparing them to additional documentation, such as the statement of work, and conducting follow-up with program officials about the nature of the requirement where needed. We verified agencies' determinations of whether a contract or order was in support of a legacy system as defined in the MGT Act by reviewing agencies' rationales for these determinations and following up with agency officials where we identified discrepancies between the determination and the rationale.

To obtain additional insights into bridge contracts and legacy systems, we selected a nonprobability sample of 26 contracts and orders from our generalizable sample of 142 contracts and orders for in-depth review. We selected these contracts based on factors such as obtaining a mix of bridge contracts and other contracts used in support of legacy IT systems and the location of the contract files. For our in-depth review of contracts and orders, we collected and analyzed contract file documentation for the selected contracts and orders and interviewed contracting and program officials to gain insights into the facts and circumstances surrounding the awards of IT noncompetitive contracts and orders. In cases where we selected a potential bridge contract, we also reviewed the predecessor contract, additional bridge contracts (if any), and the follow-on contract, if awarded at the time of our review. For bridge contracts and orders, we asked about the reasons why bridges were needed and the status of follow-on contracts.
We verified, using the definition of bridge contracts that we developed for our October 2015 report as criteria, that 13 of 142 contracts and orders in our generalizable sample were bridge contracts, based on reviews of justification and approval documents, limited source justifications, or exceptions to fair opportunity, among other documents. We acknowledge, however, that in the absence of a government-wide definition, agencies may have differing views of what constitutes a bridge contract. In addition, we found 2 additional bridge contracts not included in our generalizable sample while conducting our in-depth review. For example, we selected three noncompetitive orders from our generalizable sample for in-depth review that were used to buy accessories and maintenance for the U.S. Special Operations Command (SOCOM) PRC-152 and 117G radios. We found that although the three orders were not bridge contracts, the underlying indefinite delivery/indefinite quantity (IDIQ) contract—which outlines the terms and conditions, including pricing, for the orders—had been extended 12 months to continue services until the follow-on IDIQ could be awarded. We also selected an Air Force order for equipment for the Joint Strike Fighter instrumentation pallet for in-depth review. Further analysis revealed that the underlying IDIQ was extended for 5 additional months to continue services until officials could award a follow-on contract for this requirement. Including these 2 additional bridge contracts brings the total number of bridge contracts we identified during this review to 15. For legacy contracts and orders, we asked about the nature of the requirement and plans to move to newer technologies or systems. The selection process for the generalizable sample is described in detail below.

Selection Methodology for Generalizable Sample

We selected a generalizable stratified random sample of 171 contracts and orders from a sample frame of 3,671 fiscal year 2016 IT noncompetitive contracts and orders over $150,000, including orders under multiple award indefinite delivery/indefinite quantity contracts, to generate percentage estimates for the population. We excluded contracts and orders with estimated values below the simplified acquisition threshold of $150,000, as these contracts have streamlined acquisition procedures. We stratified the sample frame into nine mutually exclusive strata by agency and type of award (i.e., contract, order, and multiple award order for each of the three agencies). We computed the minimum sample size needed for a proportion estimate to achieve an overall precision of at least plus or minus 10 percentage points or fewer at the 95 percent confidence level. We increased the computed sample size to account for about 10 percent of the population being out of scope, such as competitive or non-IT contracts or orders. We then proportionally allocated the sample size across the defined strata and increased sample sizes where necessary so that each stratum would contain at least 10 sampled contracts or orders. The stratified sample frame and sizes are described in table 5 below.
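The minimum sample-size computation described above can be illustrated with a short sketch. The code below is a minimal illustration, not GAO's published computation: it assumes the conventional normal-approximation formula with a conservative proportion of 0.5 and a finite-population correction, and the rounding choices are ours.

```python
# A minimal sketch of a minimum sample-size computation for a proportion
# estimate; p = 0.5 and the rounding rules are assumptions, not GAO's
# published inputs.
import math

N = 3671   # sample frame of fiscal year 2016 noncompetitive IT awards
z = 1.96   # critical value for a 95 percent confidence level
e = 0.10   # precision of plus or minus 10 percentage points
p = 0.5    # most conservative proportion assumption

n0 = (z ** 2) * p * (1 - p) / (e ** 2)  # infinite-population size (~96)
n = n0 / (1 + (n0 - 1) / N)             # finite-population correction (~94)
n = math.ceil(n / 0.9)                  # allow ~10 percent out-of-scope cases

print(n)  # about 105; proportional allocation across the nine strata and a
          # 10-case minimum per stratum raise the final sample to 171
```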
We selected contracts and orders from the following components: DOD: Air Force, Army, Navy, Defense Information Systems Agency, Defense Logistics Agency, Defense Security Service, Defense Threat Reduction Agency, U.S. Special Operations Command, and Washington Headquarters Services; HHS: Centers for Disease Control and Prevention, Centers for Medicare and Medicaid Services, Food and Drug Administration, Indian Health Service, National Institutes of Health, and the Office of the Assistant Secretary for Administration; and DHS: Federal Emergency Management Agency, Office of Procurement Operations, U.S. Citizenship and Immigration Services, U.S. Coast Guard, U.S. Customs and Border Protection, and the U.S. Secret Service.

We excluded 29 contracts and orders because we determined they had been miscoded as noncompetitive or as having an exception to fair opportunity. Based on these exclusions, we estimate the number of noncompetitive contracts and orders in this population was about 3,000 (+/- 6.7 percent). All estimates in this report have a margin of error, at the 95 percent confidence level, of plus or minus 9 percentage points or fewer.

We conducted this performance audit from April 2017 to December 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Comments from the Department of Defense

Appendix III: Comments from the Department of Health and Human Services

Appendix IV: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Janet McKelvey (Assistant Director), Pete Anderson, James Ashley, Andrew Burton, Aaron Chua, Andrea Evans, Lorraine Ettaro, Julia Kennon, Miranda Riemer, Guisseli Reyes-Turnell, Roxanna Sun, Alyssa Weir, and Kevin Walsh made key contributions to this report.
Why GAO Did This Study

The federal government spends tens of billions of dollars each year on IT products and services. Competition is a key component of achieving the best return on investment for taxpayers. Federal acquisition regulations allow for noncompetitive contracts in certain circumstances. Some noncompetitive contracts act as "bridge contracts," which can be a useful tool to avoid a lapse in service but can also increase the risk of the government overpaying. There is currently no government-wide definition of bridge contracts.

GAO was asked to review the federal government's use of noncompetitive contracts for IT. This report examines (1) the extent to which agencies used noncompetitive contracts for IT, (2) the reasons for using noncompetitive contracts for selected IT procurements, (3) the extent to which IT procurements at selected agencies were bridge contracts, and (4) the extent to which IT procurements were in support of legacy systems.

GAO analyzed FPDS-NG data from fiscal years 2013 through 2017 (the most recent and complete data available). GAO developed a generalizable sample of 171 fiscal year 2016 noncompetitive IT contracts and orders awarded by DOD, DHS, and HHS—the agencies with the most spending on IT—to determine the reasons for using noncompetitive contracts and orders and the extent to which these were bridge contracts or supported legacy systems.

What GAO Found

From fiscal years 2013 through 2017, federal agencies reported obligating more than $15 billion per year, or about 30 percent, of information technology (IT) contract spending on a noncompetitive basis (see figure). GAO found, however, that Departments of Defense (DOD), Homeland Security (DHS), and Health and Human Services (HHS) contracting officials misreported competition data in the Federal Procurement Data System-Next Generation (FPDS-NG) for 22 of the 41 orders GAO reviewed. GAO's findings call into question competition data associated with nearly $3 billion in annual obligations for IT-related orders. DHS identified underlying issues resulting in the errors for its orders and took corrective action. DOD and HHS, however, had limited insight into why the errors occurred. Without identifying the issues contributing to the errors, DOD and HHS are unable to take action to ensure that competition data are accurately recorded in the future and are at risk of using inaccurate information to assess whether they are achieving their competition objectives.

GAO found that DOD, DHS, and HHS primarily cited two reasons for awarding a noncompetitive contract or order: (1) only one source could meet the need (for example, the contractor owned proprietary technical or data rights) or (2) the agency awarded the contract to a small business to help meet agency goals. GAO estimates that about 8 percent of fiscal year 2016 noncompetitive IT contracts and orders at DOD, DHS, and HHS were bridge contracts, awarded in part because of acquisition planning challenges. GAO previously recommended that the Office of Federal Procurement Policy define bridge contracts and provide guidance on their use, but it has not yet done so. GAO believes that addressing this recommendation will help agencies better manage their use of bridge contracts. Additionally, GAO estimates that about 7 percent of noncompetitive IT contracts and orders were used to support outdated or obsolete legacy IT systems.
Officials from the agencies GAO reviewed stated that these systems are needed for their missions or that they are in the process of modernizing the legacy systems or buying new systems.

What GAO Recommends

GAO recommended that DOD and HHS identify the reasons why competition data for certain orders in FPDS-NG were misreported and take corrective action. DOD and HHS concurred.
Background

Acquiring Heavy Equipment

Agencies generally acquire equipment from commercial vendors and through GSA, which contracts for the equipment from commercial vendors. In acquiring heavy equipment from a commercial vendor or GSA, agencies can purchase or lease the equipment. Generally, agencies use the term "lease" to refer to acquisitions that are time-limited and therefore distinct from purchases, and the term covers both long-term and short-term leases. For example, the three agencies we reviewed in-depth use the term "rental" to refer to short-term leases of varying time periods. According to Air Force officials, they define rentals as leases that are less than 120 days, while FWS and NPS officials said they generally use the term rental to refer to leases that are a year or less. For the purposes of this report, we use the term "rental" to refer to short-term leases defined as rentals by the agency and "long-term lease" to refer to a lease that is not considered a rental by the agency. (See fig. 1.)

In 2013, GSA began offering heavy equipment through its Short-Term Rental program, which had previously been limited to passenger vehicles, in part to eliminate ownership and maintenance costs for infrequently used heavy equipment. Under this program, agencies can request a short-term equipment rental (less than a year) from GSA, and GSA will work with a network of commercial vendors to provide the requested heavy equipment.

Heavy Equipment Reporting, Data, and Acquisition Requirements

Unlike for some other types of federal property, there are no central reporting requirements for agencies' inventories of heavy equipment. However, each federal agency is required to maintain inventory controls for its property, which includes heavy equipment. Agencies maintain inventory data through the use of agency-specific databases, and each agency can set its own requirements for what data are required and how these data are maintained. For example, while an agency may choose to maintain data in a headquarters database, it could also choose to maintain data at the local level. As another example, an agency may decide to track and maintain data on the utilization of its heavy equipment (such as the hours used) or may choose not to have such data or require any particular utilization levels.

The Federal Acquisition Regulation (FAR) governs the acquisition process of executive branch agencies when acquiring certain goods and services, including heavy equipment. Under the FAR, agencies should consider whether to lease equipment instead of purchasing it based on several factors. Specifically, the FAR provides that agency officials should evaluate cost and other factors by conducting a "lease-versus-purchase" analysis before acquiring heavy equipment. Additionally, DOD's regulations require its component agencies to prepare a justification supporting lease-versus-purchase decisions if the equipment is to be leased for more than 60 days.

Twenty Agencies Own Over 136,000 Pieces of Heavy Equipment, at an Acquisition Cost of Over $7.4 Billion

Agencies Report Owning Over 136,000 Pieces of Heavy Equipment of Various Types

Twenty agencies reported data on their owned heavy equipment—including the (1) number, (2) types, (3) acquisition year, and (4) location of the equipment—in their inventories as of June 2017.

Number

The 20 agencies reported owning over 136,000 heavy equipment items. DOD reported owning most of this heavy equipment—over 100,000 items, about 74 percent.
(See app. I for more information on agencies’ ownership of these items.) The Department of Agriculture reported owning the second highest number of heavy equipment items—almost 9,000 items, about 6 percent. (See fig. 2.) Four agencies—the Nuclear Regulatory Commission, the Department of Housing and Urban Development, the Office of Personnel Management, and the Agency for International Development—reported owning five or fewer heavy equipment items each. The 20 agencies reported owning various types of heavy equipment, such as cranes, backhoes, and road maintenance equipment in five categories: (1) construction, mining, excavating, and highway maintenance equipment; (2) airfield-specialized trucks and trailers; (3) self-propelled warehouse trucks and tractors; (4) tractors; and (5) soil preparation and harvesting equipment. Thirty-eight percent (almost 52,000 items) were in the construction, mining, excavating, and highway maintenance category (see fig. 3). Fifteen of the 20 agencies reported owning at least some items in this category. Twenty-four percent (over 33,000 items) were in the airfield-specialized trucks and trailers category, generally used to service and reposition aircraft on runways. DOD reported owning 99 percent (over 32,000) of these items, while 9 other agencies, including the Department of Labor and the National Aeronautics and Space Administration, reported owning the other 1 percent (317 items). Twenty-two percent (over 29,000 items) were in the self-propelled warehouse trucks and tractors category, which includes equipment such as forklift trucks. All 20 agencies reported owning at least one item in this category, and five agencies—the Agency for International Development, Department of Housing and Urban Development, the Environmental Protection Agency, the Nuclear Regulatory Commission, and the Office of Personnel Management—reported owning only items in this category. (For additional information on agencies’ ownership of heavy equipment in various categories, see app. I.) The 20 agencies reported acquiring their owned heavy equipment between 1944 and 2017, with an average of about 13 years since acquisition (see fig. 4). One heavy equipment manager we interviewed reported that a dump truck can last 10 to 15 years, whereas other types of equipment can last for decades if regularly used and well-maintained. The 20 agencies reported that over 117,000 heavy equipment items (86 percent) were located within the United States or its territories. Of these, about one-fifth (over 26,000) were located in California and Virginia, the two states with the most heavy equipment (see fig. 5). Of the equipment located outside of the United States and its territories, 94 percent was owned by the Department of Defense. The rest was owned by the Department of State (714 items in 141 countries from Afghanistan to Zimbabwe) and the National Science Foundation (237 items in areas such as Antarctica). Agencies Reported Spending Over $7.4 Billion to Purchase Heavy Equipment, Although Actual Costs Were Greater Than Reported The 20 agencies reported spending over $7.4 billion in 2016 dollars to acquire the heavy equipment they own (see table 1). However, actual spending was higher because this inflation-adjusted figure excludes over 37,000 heavy equipment items for which the agencies did not report acquisition cost or acquisition year, or both.
Without this information, we could not determine the inflation-adjusted cost and therefore did not include the cost of these items in our calculation. The Army owns almost all of these items, having not reported acquisition cost or acquisition year, or both, for 36,589 heavy equipment items because, according to Army officials, the data were not available centrally but may have been available at individual Army units and would have been resource-intensive to obtain. The heavy equipment items reported by the 20 agencies ranged in acquisition cost from zero dollars to over $2 million in 2016 dollars, with an average acquisition cost of about $78,000 in 2016 dollars, excluding assets with a reported acquisition cost of $0. Of the items that we adjusted to 2016 dollars and for which non-zero acquisition costs were provided, 94 percent cost less than $250,000 and accounted for 57 percent of the total adjusted acquisition costs, while 6 percent cost more than $250,000 and accounted for the other 43 percent. (See fig. 6.) High-cost items included a $779,000 hydraulic crane acquired by the National Aeronautics and Space Administration in 1997 ($1.2 million in 2016 dollars), a $1.4 million ultra-deep drilling simulator acquired by the Department of Energy in 2009 ($1.6 million in 2016 dollars), and several $2.2 million well-drilling machines acquired by the Air Force in 2013 ($2.3 million in 2016 dollars). Three Selected Agencies Purchased Almost 3,500 Pieces of Heavy Equipment in Calendar Years 2012 through 2016, but Did Not Consistently Document Lease-Versus-Purchase Analyses Air Force, FWS, and NPS Purchased Almost 3,500 Pieces of Heavy Equipment in Calendar Years 2012 through 2016; Limited Information Is Available on Leases In calendar years 2012 through 2016, the Air Force, FWS, and NPS purchased almost 3,500 pieces of heavy equipment through GSA and private vendors at a total cost of about $360 million to support mission needs. (See table 2.) These agencies also spent over $5 million on long-term leases and rentals during this time period. The Air Force spent over $300 million to purchase over 2,600 heavy equipment assets in calendar years 2012 through 2016 that were used to support and maintain its bases globally. For example, according to Air Force officials, heavy equipment is often used to maintain runways and service and reposition aircraft on runways. While the majority of Air Force heavy equipment purchased in this time period is located in the United States, 41 percent of this heavy equipment is located outside the United States and its territories in 17 foreign countries to support global military bases. The Air Force could not provide complete information on its heavy equipment leases for fiscal years 2012 through 2016. Specifically, the Air Force provided data on 33 commercial heavy equipment leases that were ongoing as of August 2017 but could not provide cost data for these leases because this information is not tracked centrally. Additionally, the Air Force could not provide any data on leases that occurred previously because, according to Air Force officials, lease records are removed from the Air Force database upon termination of the lease. Officials said that rentals are generally handled locally and obtaining complete data would require a data call to over 300 base contracting offices.
Air Force officials stated that rentals are generally used in unique situations involving short-term needs such as responding to natural disasters. For example, following Hurricane Sandy, staff at Langley Air Force Base in Virginia used rental equipment to clean up and repair the base. Although the Air Force did not provide complete information on rentals, data we obtained from GSA’s Short-Term Rental program indicated that the Air Force rented heavy equipment in 46 transactions, not reflected in the Air Force data we received, totaling over $3.7 million since GSA began offering heavy equipment through the program in 2013. FWS spent over $32 million to purchase 348 heavy equipment assets in calendar years 2012 through 2016. FWS used its heavy equipment to maintain refuge areas throughout the United States and its territories, including maintaining roads and nature trails. FWS also used heavy equipment to respond to inclement weather and natural disasters. Most of the heavy equipment items purchased by FWS were in the construction, mining, excavating, and highway maintenance equipment category and include items such as excavators, which were used for moving soil, supplies, and other resources. FWS officials reported that they did not have any long-term leases for any heavy equipment in fiscal years 2012 through 2016 because they encourage equipment sharing and rentals to avoid long-term leases whenever possible. FWS officials provided data on 228 rentals for this time period with a total cost of over $1 million. Information regarding these rentals is contained in an Interior-wide property management system, the Financial Business Management System (FBMS). FWS officials told us that they have not rented heavy equipment through GSA’s program because they have found lower prices through local equipment rental companies. NPS spent over $27 million to purchase 471 heavy equipment assets in calendar years 2012 through 2016. NPS uses heavy equipment—located throughout the United States and its territories—to maintain national parks and respond to inclement weather and natural disasters. For example, NPS used heavy equipment such as dump trucks, snow plows, road graders, and wheel loaders to clear and salt the George Washington Memorial Parkway in Washington, D.C., following snow and ice storms. Most of the heavy equipment items purchased by NPS were in the construction, mining, excavating, and highway maintenance equipment category and include items such as excavators, which are used for moving soil, supplies, and other resources. NPS reported spending about $360,000 on 230 long-term leases and rentals in fiscal years 2012 through 2016, not including rentals through GSA’s Short-Term Rental program. As with FWS, NPS leases and rentals are contained in FBMS, which is Interior’s property management system. Data we obtained from GSA’s Short-Term Rental program indicated that NPS rented heavy equipment in 26 transactions totaling over $200,000 since GSA began offering heavy equipment through the program in 2013, for a potential total cost of over $560,000 for these long-term leases and rentals.
Selected Agencies Did Not Consistently Conduct and Document Lease-Versus-Purchase Analyses As mentioned earlier, the FAR provides that executive branch agencies seeking to acquire equipment should consider whether it is more economical to lease equipment rather than purchase it and identifies factors agencies should consider in this analysis, such as the estimated length of the period the equipment is to be used, the extent of use in that period, and maintenance costs. This analysis is commonly referred to as a lease-versus-purchase analysis. While the FAR does not specifically require that agencies document their lease-versus-purchase analyses, according to federal internal control standards, management should clearly document all transactions and other significant events in a manner that allows the documentation to be readily available for examination, and should communicate quality information to enable staff to complete their responsibilities. As discussed below, we found that most acquisitions we reviewed from FWS, NPS, and the Air Force did not contain any documentation of a lease-versus-purchase analysis. Specifically, officials were unable to provide documentation of a lease-versus-purchase analysis for six of the eight acquisitions we reviewed. FWS officials were able to provide documentation for the other two. Officials told us that a lease-versus-purchase analysis was not conducted for five of the six acquisitions and did not know whether such an analysis was conducted for the other acquisition. According to agency officials, the main reason why analyses were not conducted or documented for these six acquisitions is that the circumstances in which such analyses were to be performed or documented were not always clear to FWS, NPS, and Air Force officials. Interior In addition to the FAR, Interior has agency guidance stating that bureaus should conduct and document lease-versus-purchase analyses. This July 2013 guidance—which FWS and NPS are to follow—states that requesters should perform a lease-versus-purchase analysis when requesting heavy equipment valued at $15,000 or greater. According to the guidance, this analysis should address criteria in the FAR and include a discussion of the financial and operating advantages of alternate approaches that would help contracting officials determine the final appropriate acquisition method. At the time the guidance was issued, Interior also provided a lease-versus-purchase analysis tool to aid officials in conducting this analysis. Additionally, in April 2016, Interior issued a policy to implement the July 2013 guidance. The 2016 policy clarifies that program offices are required to complete Interior’s lease-versus-purchase analysis tool and provide the completed analysis to the relevant contracting officer. Within Interior, bureaus are responsible for ensuring that procurement requirements are met, including the requirements and directives outlined in Interior’s 2013 guidance and 2016 policy on lease-versus-purchase analyses, according to agency officials. Within FWS, local procurement specialists prepare procurement requests and ensure that procurement requirements are met and that all viable options have been considered. Regional equipment managers review these procurement requests, decide whether to purchase or lease the requested equipment, and prepare the lease-versus-purchase analysis tool if the procurement specialist has indicated that it is required.
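To illustrate the kind of comparison a lease-versus-purchase analysis involves, the following is a minimal sketch, assuming hypothetical costs, a flat discount rate, and a simplified subset of the FAR factors (period of use, maintenance costs, and residual value); it is not Interior’s analysis tool or a prescribed government method.

```python
def present_value(cash_flows, discount_rate):
    """Discount a list of annual cash flows (year 0 first) to present value."""
    return sum(cf / (1 + discount_rate) ** year for year, cf in enumerate(cash_flows))

def compare_lease_vs_purchase(purchase_price, annual_maintenance, resale_value,
                              annual_lease_cost, years, discount_rate=0.02):
    # Purchase: pay up front, maintain each year, recover a resale value at the end.
    purchase_flows = [purchase_price] + [annual_maintenance] * years
    purchase_flows[-1] -= resale_value
    # Lease: pay the lease cost each year; the lessor bears maintenance in this sketch.
    lease_flows = [0] + [annual_lease_cost] * years
    return (present_value(purchase_flows, discount_rate),
            present_value(lease_flows, discount_rate))

# Hypothetical excavator needed for 5 years: $95,000 to buy, $4,000 a year to
# maintain, and a $20,000 resale value, versus $18,000 a year to lease.
purchase_pv, lease_pv = compare_lease_vs_purchase(95_000, 4_000, 20_000,
                                                  18_000, years=5)
print(f"purchase: ${purchase_pv:,.0f}  lease: ${lease_pv:,.0f}")
```

In this hypothetical, leasing carries the lower present value; a longer period of use can reverse the result, which is why the FAR points to the estimated length and extent of use.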
Within NPS, local procurement specialists are responsible for ensuring that all procurements adhere to relevant requirements and directives, including documenting the lease-versus-purchase analysis. Of the three FWS heavy equipment acquisitions we reviewed for which the 2013 Interior guidance was applicable, one included a completed lease-versus-purchase analysis tool; one documented the rationale for purchasing rather than leasing, although it did not include Interior’s lease-versus-purchase analysis tool; and one did not include any documentation related to a lease-versus-purchase analysis. (See table 3.) Regarding the acquisition for which no documentation of a lease-versus-purchase analysis was provided—a 12-month lease of an excavator and associated labor costs for over $19,000—FWS officials initially told us that a lease-versus-purchase analysis was not required because the equipment lease was less than $15,000, and Interior’s guidance required a lease-versus-purchase analysis for procurements of equipment valued at $15,000 or greater. However, we found the guidance did not specify whether the $15,000 threshold includes the cost of labor. We also found that Interior’s guidance did not specify if a lease-versus-purchase analysis was required if the total cost of a rental is less than the purchase price. FWS officials acknowledged that Interior guidance is not clear and that it would be helpful for Interior to clarify whether these leases require a lease-versus-purchase analysis. NPS officials were unable to provide documentation of a lease-versus-purchase analysis for the single heavy equipment acquisition we reviewed—the purchase of a wheeled tractor in 2015 for $43,177. According to these officials, they could not do so because of personnel turnover in the contracting office that would have documented the analysis. In addition, they told us that they believe that such analyses are not always completed for heavy equipment acquisitions because responsibility for completing these analyses is unclear. Specifically, they told us that it was unclear whether the responsibility lies with the official requesting the equipment, the contracting personnel who facilitate the acquisition, or the property personnel who manage inventory data. However, when we discussed our findings with Interior and NPS officials, NPS officials were made aware of the 2016 Interior policy that specifically requires program offices—the officials requesting the equipment—to complete the lease-versus-purchase analysis and provide documentation of this analysis to the contracting officer. As a result, NPS officials told us at the end of our review that program office officials will now be required to complete the lease-versus-purchase analysis tool and document this analysis. Air Force According to Air Force officials responsible for managing heavy equipment, financial or budget personnel at individual bases are responsible for conducting lease-versus-purchase analyses, also called economic analyses, to support purchase and lease requests. Air Force fleet officials told us that they then review these requests from a fleet perspective, considering factors such as whether the cost information provided in the request is from a reputable source, expected maintenance costs, and whether a requesting base has the capability to maintain the requested equipment. However, they said they do not check to ensure that a lease-versus-purchase analysis was completed or review the analysis.
Equipment rentals can be approved at individual bases. In our review of four Air Force heavy equipment acquisitions, we found no instances in which Air Force officials documented a lease-versus-purchase analysis (see table 4). For the acquisitions that we reviewed, Air Force officials told us they did not believe a lease-versus-purchase analysis was required because the new equipment was either replacing old equipment that was previously approved or could be deployed. Accordingly, the Air Force purchased two forklifts in 2013 without conducting lease-versus-purchase analyses because the forklifts were replacing old forklifts that were authorized in 1997 and 2005. Furthermore, Air Force officials told us that both of these forklifts could be deployed and indicated that lease-versus-purchase analyses are not required for deployable equipment. However, the Air Force does not have guidance that describes the circumstances that require either a lease-versus-purchase analysis or documentation of the rationale for not completing such an analysis. Although we identified several instances in which officials in the three selected agencies did not document lease-versus-purchase analyses, officials from these agencies stated that they consider mission needs and equipment availability, among other factors, when making these decisions. For example, Air Force officials told us that following Hurricane Sandy, staff at Langley Air Force Base in Virginia used rental equipment to clean up and repair the base because the equipment was needed immediately to ensure the base could meet its mission. Moreover, availability of heavy equipment for lease or rental, which can be affected by factors such as geography and competition for equipment, is a key consideration. For example, FWS officials told us that the specialized heavy equipment sometimes needed may not be available for long-term lease or rent in remote areas such as Alaska and the Midway Islands, so the agency purchases the equipment. In addition, some agency officials told us that they may purchase heavy equipment even if that equipment is needed only sporadically if there is likely to be high demand for rental equipment. For example, following inclement weather or a natural disaster, demand for certain heavy equipment rentals can be high and equipment may not be available to rent when it is needed. While we recognize that mission needs and other factors are important considerations, without greater clarity regarding when to conduct or document lease-versus-purchase analyses, officials at FWS, NPS, and the Air Force may not be conducting such analyses when appropriate and may not always make the best acquisition decisions. These agencies could be overspending on leased equipment that would be more cost-effective if purchased or overspending to purchase equipment when it would be more cost-effective to lease or rent. Moreover, without documenting decisions on whether to purchase or lease equipment, they lack information that could be used to inform future acquisition decisions for similar types of equipment or projects. Air Force and FWS Periodically Assess Heavy Equipment Utilization; NPS Does Not But Is Developing Guidance to Do So Air Force guidance requires that fleet managers collect utilization data for both vehicles and heavy equipment items, such as the number of hours used, miles traveled, and maintenance costs. The Air Force provided us with utilization data for over 18,000 heavy equipment items and uses such data to inform periodic base validations.
Specifically, Air Force officials said that every 3 to 5 years, each Air Force base reviews the on-base equipment to ensure that the installation has the appropriate heavy equipment to complete its mission and reviews utilization data to identify items that are underutilized. If heavy equipment is considered underutilized, the equipment is relocated—either moved to another location or sent to the Defense Logistics Agency for reuse or transfer to another agency. According to Air Force officials, the Air Force has relocated over 700 heavy equipment items since 2014, based on the results of the validation process and other factors, such as replacing older items and agency needs. Similarly, FWS guidance for managing heavy equipment utilization sets forth minimum utilization hours for certain types of heavy equipment and describes requirements for reporting utilization data. FWS provided us with utilization data on over 3,000 heavy equipment items. According to officials, condition assessments of heavy equipment are required by FWS guidance every 3 to 5 years. According to FWS officials, condition assessments inform regional-level decision making about whether to move equipment to another FWS location or dispose of the equipment. In contrast, NPS does not require the collection of utilization data to evaluate heavy equipment use and does not have guidance for managing heavy equipment utilization. However, NPS officials told us that they recognize the need for such guidance. NPS officials shared with us draft guidance that they have developed, which would require collection of utilization data for heavy equipment, such as hours or days of usage each month. According to NPS officials, they plan to send the guidance to the NPS policy office for final review in March 2018. Until this guidance is completed and published, NPS is taking interim actions to manage the utilization of its heavy equipment. For example, NPS officials stated that they have asked NPS locations to collect and post monthly utilization data, discussed the collection of utilization data at fleet meetings, and distributed job aids to support this effort. During the course of our review, NPS officials provided us with some utilization data for about 1,400 of the more than 2,400 NPS heavy equipment items. Specifically, of the 1,459 heavy equipment items for which NPS provided utilization data, 541 items had utilization data for each month. For the remaining 918 items, utilization data were reported for some, but not all, months. Conclusions The federal government has spent billions of dollars to acquire heavy equipment. There is no requirement that agencies report on the inventory of this equipment, as there is no standard definition of heavy equipment. When deciding how to acquire this equipment, agencies should conduct a lease-versus-purchase analysis as provided in the FAR, which is a critical mechanism to ensure agencies are acquiring the equipment in the most cost-effective manner. Because FWS, NPS, and the Air Force were unclear about when such an analysis was required, they did not consistently conduct or document analyses of whether it was more economical to purchase or lease heavy equipment. In the absence of clarity on the circumstances in which lease-versus-purchase analyses for heavy equipment acquisitions are to be conducted and documented, the agencies may not be spending funds on heavy equipment cost-effectively.
Recommendations for Executive Action We are making two recommendations—one to the Air Force and one to the Department of the Interior. The Secretary of the Air Force should develop guidance to clarify the circumstances in which lease-versus-purchase analyses for heavy equipment acquisitions are to be conducted and documented. (Recommendation 1) The Secretary of the Interior should further clarify in guidance the circumstances in which lease-versus-purchase analyses for heavy equipment acquisitions are to be conducted and documented. (Recommendation 2) Agency Comments We provided a draft of this report to the Departments of Agriculture, Defense, Energy, Homeland Security, Housing and Urban Development, the Interior, Justice, Labor, State, and Veterans Affairs; General Services Administration; National Aeronautics and Space Administration; National Science Foundation; Nuclear Regulatory Commission; Office of Personnel Management; and U.S. Agency for International Development. The Departments of Agriculture, Energy, Homeland Security, Housing and Urban Development, Justice, State, and Veterans Affairs, as well as the General Services Administration, National Aeronautics and Space Administration, National Science Foundation, Nuclear Regulatory Commission, Office of Personnel Management, and U.S. Agency for International Development, did not have comments. The Department of Labor provided technical comments, which we incorporated as appropriate. In written comments, reproduced in appendix III, the Department of Defense stated that it concurred with our recommendation and plans to issue a bulletin to Air Force contracting officials. In written comments, reproduced in appendix IV, the Department of the Interior stated that it concurred with our recommendation and plans to implement it. If you or members of your staff have any questions about this report, please contact me at (202) 512-2834 or RectanusL@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix V. Appendix I: Table of 20 Agencies’ Heavy Equipment Inventories by Category, as of June 2017 [Table: each agency’s heavy equipment inventory counts across the five equipment categories—construction, mining, excavating, and highway maintenance equipment; airfield-specialized trucks and trailers; self-propelled warehouse trucks and tractors; tractors; and soil preparation and harvesting equipment.]
Appendix II: Objectives, Scope, and Methodology This report addresses: (1) the number, type, and cost of heavy equipment items that are owned by the 24 CFO Act agencies; (2) the heavy equipment items selected agencies have recently acquired and how selected agencies decided to purchase or lease this equipment; and (3) how selected agencies manage the utilization of their heavy equipment. To identify the number, type, and cost of heavy equipment owned by federal agencies, we first interviewed officials at the General Services Administration to determine whether there were government-wide reporting requirements for owned heavy equipment and learned that there are no such requirements. We then obtained and analyzed data on agencies’ spending on equipment purchases and leases from the Federal Procurement Data System–Next Generation (FPDS-NG), which contains government-wide data on agencies’ contracts. However, in reviewing the data available and identifying issues with the reliability of the data, we determined that data on contracts would not be sufficient to answer the question of what heavy equipment the 24 CFO Act agencies own. We therefore conducted a data collection effort to obtain heavy equipment inventory information from the 24 CFO Act agencies, which are the Departments of Agriculture, Commerce, Defense, Education, Energy, Health and Human Services, Homeland Security, Housing and Urban Development, the Interior, Justice, Labor, State, Transportation, the Treasury, and Veterans Affairs; Environmental Protection Agency; General Services Administration; National Aeronautics and Space Administration; National Science Foundation; Nuclear Regulatory Commission; Office of Personnel Management; Small Business Administration; Social Security Administration; and Agency for International Development. Because there is no generally accepted definition of heavy equipment, we identified 12 federal supply classes in which the majority of items are self-propelled equipment but not passenger vehicles or items that are specific to combat and tactical purposes, as these items are generally not considered to be heavy equipment. (See table 5.) We then vetted the appropriateness of these selected supply classes with Interior, FWS, NPS, and Air Force agency officials, as well as with representatives from a fleet management consultancy and a rental company, and they generally agreed that items in selected federal supply classes are considered heavy equipment. Federal supply classes are used in FPDS-NG and are widely used in agencies’ inventory systems. Overall, about 90 percent of the heavy equipment items that agencies reported were assigned a federal supply class in the agency’s inventory data. In discussing heavy equipment categories in the report, we use the category titles below. To identify points of contact at the 24 CFO Act agencies, we obtained GSA’s list of contact information for agencies’ national utilization officers, who are agency property officers who coordinate with GSA. As a preliminary step, we contacted these individuals at each of the 24 CFO Act agencies and asked them to either confirm that they were the appropriate contacts or provide contact information for the appropriate contact, and to inform us if they did not own heavy equipment. Officials at 4 agencies—Department of Education, Department of the Treasury, General Services Administration, and Small Business Administration—indicated that the agency did not own any items in the relevant federal supply classes.
Officials at 16 of these agencies indicated that they would be able to respond on a departmental level because the relevant inventory data are maintained centrally, while officials at 4 agencies indicated that we would need to obtain responses from officials at some other level because the relevant inventory data are not maintained centrally. (See table 7 for a list of organizations within the 20 CFO Act agencies that indicated they own relevant equipment and responded to our data collection effort.) After identifying contacts responsible for agencies’ heavy-equipment inventory data, we prepared data collection instruments for requesting information on heavy equipment and tested these documents with representatives from 4 of the 20 CFO Act agencies that indicated they own heavy equipment to ensure that the documents were clear and logical and that respondents would be able to provide the requested data and answer the questions without undue burden. These agency representatives were selected to reflect a variety of spending levels on federal supply group 38 equipment as reported in FPDS-NG, a mix of civilian and military agencies, and different levels at which the agency would be responding to the data collection effort (e.g., at the departmental level or at a sub-departmental level). Our data collection instrument requested data on respondent organizations’ owned assets in 12 federal supply classes as of June 2017. Respondents provided data on original acquisition costs in nominal terms, with some acquisitions occurring over 50 years ago. In order to provide a fixed point of reference for appropriate comparison, we present in our report inflation-adjusted acquisition costs using calendar year 2016 as the reference. To adjust these dollar amounts for inflation, we used the Bureau of Labor Statistics’ Producer Price Index by Commodity for Machinery and Equipment: Construction Machinery and Equipment (WPU112), compiled by the Federal Reserve Bank of St. Louis. We conducted the data collection effort from July 2017 through October 2017 and received responses from all 20 agencies that indicated they own heavy equipment. In order to assess the reliability of agencies’ reported data, we collected and reviewed agencies’ responses regarding descriptions of their inventory systems, frequency of data entry, agency uses of the data, and agencies’ opinions on potential limitations of the use of their data in our analysis. We conducted some data cleaning, which included examining the data for obvious errors and eliminating outliers. We did not verify the data or responses received; the results of our data collection effort are used only for descriptive purposes and are not generalizable beyond the 24 CFO Act agencies. Based on the steps we took, we found these data to be sufficiently reliable for our purposes. To determine the heavy equipment items that selected agencies recently acquired and how these agencies decided whether to purchase or lease this equipment, we first used data from the FPDS-NG to identify agencies that appeared to have the highest obligations for construction or heavy equipment, or both, and used this information, along with other factors, to select DOD and Interior.
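As a minimal sketch of the inflation adjustment described earlier in this appendix—scaling each nominal acquisition cost by the ratio of the 2016 index level to the acquisition-year index level—the following assumes hypothetical index values rather than the actual WPU112 series:

```python
# Hypothetical annual index levels; the report used the Bureau of Labor
# Statistics' Producer Price Index for construction machinery (WPU112).
PPI = {1997: 131.1, 2009: 193.4, 2013: 208.9, 2016: 210.4}

def to_2016_dollars(nominal_cost, acquisition_year, index=PPI, base_year=2016):
    """Scale a nominal cost by the ratio of base-year to acquisition-year index levels."""
    if acquisition_year not in index:
        # Mirrors the report's treatment: items without a usable acquisition
        # year or cost are excluded from the inflation-adjusted total.
        return None
    return nominal_cost * index[base_year] / index[acquisition_year]

# A $779,000 crane acquired in 1997 adjusts to roughly $1.25 million under
# these hypothetical index values.
print(round(to_2016_dollars(779_000, 1997)))
```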
At the time, in the absence of a generally accepted definition of heavy equipment, we reviewed data related to federal supply group 38—construction, mining, excavating, and highway maintenance equipment—because (1) we had not yet defined heavy equipment for the purposes of our review; (2) agency officials had told us that most of what could be considered heavy equipment was in this federal supply group; and (3) our analysis of data from usaspending.gov showed that about 80 percent of spending on items that may be considered heavy equipment was in this federal supply group. In meeting with officials at these departments, we learned that agencies within each department manage heavy equipment independently, so we requested current inventory data for Interior bureaus and the DOD military departments and selected three agencies that had among the largest inventories of construction and/or heavy equipment at the time, among other criteria: the U.S. Air Force (Air Force); the Fish and Wildlife Service (FWS); and the National Park Service (NPS). We then used information from our data collection effort—which included the number, type, cost, acquisition year, and other data elements—to determine heavy equipment items that these agencies acquired during 2012 through 2016. We interviewed agency officials to determine what lease data were available from the three selected agencies. We assessed the reliability of these data through interviews with agency officials and reviewed the data for completeness and potential outliers. We determined that the data provided were sufficiently reliable for the purposes of documenting leased and rental heavy equipment. We also obtained data from GSA’s Short-Term Rental program for August 2012, when the first item was rented under this program, through February 2017, when GSA provided the data. We used these data to identify selected agencies’ rentals of heavy equipment through GSA’s Short-Term Rental program and associated costs. We interviewed officials from GSA’s Short-Term Rental program to discuss the program history as well as the reliability of their data on these rented heavy equipment items. We determined that the data were sufficiently reliable for our purposes. To determine how the three selected agencies decide whether to purchase or lease heavy equipment, we interviewed fleet and property managers at these selected agencies and asked them to describe their process for making these decisions as well as to identify relevant federal and agency regulations and guidance. We reviewed relevant federal and agency regulations and guidance regarding how agencies should make these decisions, including: Federal Acquisition Regulation; Office of Management and Budget’s Circular A-94, Guidelines and Discount Rates for Benefit-Cost Analysis of Federal Programs; Defense Federal Acquisition Regulation Supplement; Air Force Manual 65-506; Air Force Guidance Memorandum to Air Force Instruction 65-501; and Interior’s Guidance On Lease Versus Purchase Analysis and Capital Lease Determination for Equipment Leases. We also reviewed the Standards for Internal Control in the Federal Government for guidance on documentation as well as past GAO work that reviewed agencies’ lease-versus-purchase analyses.
To determine whether the three selected federal agencies documented lease-versus-purchase decisions for selected acquisitions and adhered to relevant agency guidance, we selected and reviewed a non-generalizable sample of 10 heavy equipment acquisitions—two purchases each from the Air Force, FWS, and NPS, and two leases each from the Air Force and FWS. Specifically, we used inventory data obtained through our data collection effort, described above, to randomly select two heavy equipment purchases from each selected agency using the following criteria: calendar years 2012 through 2016; the two federal supply classes most prevalent in each selected agency’s heavy equipment inventory, as determined by the data collection effort described above; and for NPS and FWS, acquisition costs of over $15,000. In addition, we used lease data provided by the Air Force and FWS to randomly select two heavy equipment leases per agency. Because NPS could not provide data on heavy equipment leases, we did not select or review any NPS lease decisions. To select the Air Force and FWS leases, we used the following criteria: fiscal years 2012 through 2016; for the Air Force, which included federal supply classes in the lease data provided, the two federal supply classes most prevalent in the lease data, and for FWS, which did not include federal supply class in the lease data provided, the two federal supply classes most prevalent in the purchase data; and for FWS, leases over $15,000. After selecting these acquisitions, we determined that one FWS lease and one NPS purchase we selected pre-dated Interior’s 2013 guidance on lease-versus-purchase analysis and excluded these acquisitions from our analysis, for a total of eight acquisitions. In reviewing agencies’ documentation related to these acquisitions, we developed a data collection instrument to assess the extent to which agencies documented lease-versus-purchase analyses and, in the case of FWS and NPS, adhered to relevant Interior guidance. We supplemented our review of these acquisition decisions by interviewing officials at the three selected agencies and requesting additional information to understand the specific circumstances surrounding each procurement. Our findings are not generalizable across the federal government or within each selected department. To determine how selected agencies manage heavy equipment utilization, we interviewed officials at the three selected agencies to identify departmental and agency-specific guidance and policies and to determine whether utilization requirements exist. We reviewed guidance identified by these officials, including Interior and Air Force vehicle guidance, both of which apply to heavy equipment, and FWS’s Heavy Equipment Utilization and Replacement Handbook. We also compared their practices to relevant Standards for Internal Control in the Federal Government. For the selected agencies with guidance for managing heavy equipment—Air Force and FWS—we reviewed the guidance to determine if and how selected agencies measured and documented heavy equipment utilization. For example, we reviewed whether selected agencies developed reports for managing heavy equipment utilization, such as Air Force validation reports and FWS condition assessment reports.
We also reviewed Air Force, FWS, and NPS utilization data for heavy equipment, but we did not independently calculate or verify the utilization rate for individual heavy equipment items because each heavy equipment item (backhoe, forklift, tractor, etc.) has different utilization requirements depending on various factors such as the brand, model, or age of equipment. However, we did request information about agency procedures to develop and verify utilization rates. We assessed the reliability of the utilization data through interviews with agency officials and a review of the data for completeness and potential outliers. We determined that the data were sufficiently reliable for the purposes of providing evidence of utilization data collection for heavy equipment assets. We also visited the NPS George Washington Memorial Parkway to interview equipment maintenance officials regarding the procurement and management of heavy equipment and to photograph heavy equipment. We selected this site because of its range of heavy equipment and its proximity to the Capital region. We conducted this performance audit from October 2016 to February 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix III: Comments from the Department of Defense Appendix IV: Comments from the Department of the Interior Appendix V: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the individual named above, John W. Shumann (Assistant Director), Rebecca Rygg (Analyst in Charge), Nelsie Alcoser, Melissa Bodeau, Terence Lam, Ying Long, Josh Ormond, Kelly Rubin, Crystal Wesco, and Elizabeth Wood made key contributions to this report.
Why GAO Did This Study Federal agencies use heavy equipment such as cranes and forklifts to carry out their missions, but there is no government-wide data on federal agencies' acquisition or management of this equipment. GAO was asked to review federal agencies' management of heavy equipment. This report, among other objectives, examines: (1) the number, type, and costs of heavy equipment items that are owned by 20 federal agencies and (2) the heavy equipment that selected agencies recently acquired as well as how they decided whether to purchase or lease this equipment. GAO collected heavy equipment inventory data as of June 2017 from the 24 agencies that have chief financial officers responsible for overseeing financial management. GAO also selected three agencies (using factors such as the heavy equipment fleet's size) and reviewed their acquisitions of and guidance on heavy equipment. These agencies' practices are not generalizable to all acquisitions but provide insight into what efforts these agencies take to acquire thousands of heavy equipment items. GAO also interviewed officials at the three selected agencies. What GAO Found Of the 24 agencies GAO reviewed, 20 reported owning over 136,000 heavy equipment items such as cranes, backhoes, and forklifts, and spending over $7.4 billion (in 2016 dollars) to acquire this equipment. The remaining 4 agencies reported that they do not own any heavy equipment. The three selected agencies GAO reviewed in-depth—the Air Force within the Department of Defense (DOD), and the Fish and Wildlife Service and the National Park Service within the Department of the Interior (Interior)—spent about $360 million to purchase about 3,500 heavy equipment assets in calendar years 2012 through 2016 and over $5 million to lease heavy equipment in fiscal years 2012 through 2016. Officials from all three agencies stated that they consider mission needs and the availability of equipment leases when deciding whether to lease or purchase heavy equipment. Federal regulations provide that agencies should consider whether it is more economical to lease or purchase equipment when acquiring heavy equipment, and federal internal control standards require that management clearly document all transactions in a manner that allows the documentation to be readily available for examination. However, in reviewing selected leases and purchases of heavy equipment from these three agencies, GAO found that officials did not consistently conduct or document lease-versus-purchase analyses. Officials at the Air Force and Interior said that there was a lack of clarity in agency policies about when they were required to conduct and document such analyses. Without greater clarity on when lease-versus-purchase analyses should be conducted and documented, these agencies may not be spending funds on heavy equipment effectively. What GAO Recommends The Department of the Interior and the Air Force should clarify the circumstances in which lease-versus-purchase analyses for heavy equipment acquisitions are to be conducted and documented. The Departments of the Interior and Defense concurred with these recommendations.
Background Establishing Foreign Currency Budget Rates As part of the annual budget formulation process for each fiscal year, DOD establishes, for each of nine foreign currencies, a foreign currency budget rate (units of foreign currency per one United States (U.S.) Dollar) to use when developing operation and maintenance (O&M) and military personnel (MILPERS) funding requirements for overseas expenditures. Foreign currency budget rates for a particular fiscal year are established approximately 18 months prior to the fiscal year when overseas obligations will be incurred and disbursements made. For example, in June 2015, the Office of the Under Secretary of Defense (Comptroller) (OUSD(C)) issued guidance to, in part, instruct the services on the foreign currency rates to use in building their fiscal year 2017 budgets. In February 2016, as part of the President’s budget, DOD submitted its proposed fiscal year 2017 budget to Congress, and it began incurring obligations against subsequently appropriated amounts on October 1, 2016. DOD has used various methodologies for establishing the foreign currency budget rates. In 2005, we reviewed DOD’s methodology for developing its foreign currency budget rates and reported that DOD’s approach for estimating its foreign currency requirements for the fiscal year 2006 budget was a reasonable approach for forecasting foreign currency rates that could produce a more realistic estimate than its historical approach. In its fiscal year 2006 through 2016 budget requests, DOD used a centered weighted average model that combined both a 5-year average of exchange rates and an average of the most recently observed 12 months of exchange rates. For its fiscal year 2017 request, DOD adjusted its methodology to establish the foreign currency budget rates. Specifically, DOD established its foreign currency rates by calculating a 6-month average of Wall Street Journal rates published every Monday from May 25, 2015, to November 16, 2015. According to an OUSD(C) official, the 6-month average more closely represented foreign currency exchange rates experienced by the department during budget formulation, and it accounted for the strength of the U.S. Dollar, which had increased as compared with its historical 5-year average. DOD’s analysis found that the use of the 5-year historical average would have resulted in substantial gains when compared with gains expected from application of the 6-month average. More specifically, DOD projected gains of about $1 billion using the 5-year average of rates. Obligating and Disbursing Amounts Using Foreign Currency Rates During the fiscal year for which a budget is developed, DOD incurs obligations for its overseas O&M and MILPERS activities. Those obligations are recorded using the foreign currency budget rates. DOD uses various methods for selecting foreign currency rates to liquidate those obligations through disbursements, which may differ from the budget rates. DOD’s preferred payment method for foreign currency transactions is the Department of the Treasury’s (Treasury) comprehensive international payment and collection system—the International Treasury Services (ITS.gov) system—which serves federal agencies making payments in nearly 200 countries. ITS.gov offers a number of rates, including advanced rates available up to 5 days in advance of disbursement, and the spot rate. The spot rate is the price for foreign currencies for delivery in 2 business days.
While advanced rates, like spot rates, are based on the current market rate, advanced rates at the time they are selected are generally higher than the spot rate, with the 5-day advanced rate being the highest, because the rates are locked in ahead of the actual value date. While the spot rate can be more cost-effective, it requires immediate transaction processing, which may not be feasible for all disbursements. Differences between obligations incurred at the foreign currency budget rates and the amounts that DOD actually disburses drive gains or losses in the appropriated amounts DOD has available for its planned overseas expenditures. For example, if DOD budgeted for the U.K. Pound at a rate of .6289 (that is, 1 U.S. Dollar buys .6289 U.K. Pounds) as it did in fiscal year 2016, and the rate experienced at the time of disbursement was .6845, then DOD would have requested more funds than were actually needed for transactions involving the U.K. Pound. That would have resulted in a gain from the transaction—meaning that DOD would need less funding than was budgeted for the transaction. Conversely, a current rate that is lower than what was budgeted will result in a loss—and DOD would require more funds than were budgeted for the transaction. Foreign Currency Accounts Within each of the services’ O&M and MILPERS appropriations accounts, amounts are available for overseas activities. Amounts obligated for overseas activities, along with associated foreign currency gains and losses, are managed by the services as part of the overall management of their O&M and MILPERS appropriations accounts. Service components use foreign currency fluctuation accounts within their O&M and MILPERS appropriations to manage realized gains and losses in direct programs due to fluctuations in foreign exchange rates. The service-level foreign currency fluctuation accounts are maintained at various budgetary levels within the service components. In fiscal year 1979, Congress appropriated $500 million to establish the Foreign Currency Fluctuations, Defense (FCFD) account for purposes of maintaining the budgeted level of operations in the MILPERS and O&M appropriation accounts by mitigating substantial gains or losses to those appropriations caused by foreign currency rate fluctuations. FCFD appropriations are different from the O&M and MILPERS appropriations in two ways. First, FCFD account amounts are no-year amounts, meaning that they are available until expended, while in general, O&M and MILPERS appropriations are 1-year amounts and expire at the end of the fiscal year for which they were appropriated. Expired O&M and MILPERS amounts remain available only for limited purposes for 5 additional fiscal years. At the end of the 5-year expired period, any remaining O&M or MILPERS amounts, obligated or unobligated, are canceled and returned to Treasury. Second, FCFD account amounts may be used only to pay obligations incurred because of fluctuations in currency exchange rates of foreign countries, while O&M amounts are available for diverse expenses necessary for the operation and maintenance of the services and MILPERS amounts are available for service personnel-related expenses, such as pay, permanent changes of station travel, and expenses of temporary duty travel, among other purposes.
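Returning to the U.K. Pound example above, the gain-or-loss arithmetic can be sketched as follows; the 1 million Pound obligation is hypothetical, and rates are expressed as units of foreign currency per U.S. Dollar:

```python
def currency_gain_or_loss(foreign_amount, budget_rate, disbursement_rate):
    """Dollars budgeted at the budget rate minus dollars actually needed at disbursement."""
    budgeted_dollars = foreign_amount / budget_rate
    disbursed_dollars = foreign_amount / disbursement_rate
    return budgeted_dollars - disbursed_dollars  # positive = gain, negative = loss

# Fiscal year 2016 budget rate of .6289 versus a stronger-dollar disbursement
# rate of .6845: a 1 million Pound obligation needs fewer dollars than budgeted.
gain = currency_gain_or_loss(1_000_000, 0.6289, 0.6845)
print(f"gain of ${gain:,.0f}")  # about $129,000 on this hypothetical obligation
```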
Amounts from the FCFD account may be transferred to service-level foreign currency fluctuation accounts within O&M and MILPERS appropriation accounts to offset losses in buying power due to unfavorable differences between the budget rate and the foreign currency exchange rate prevailing at the time of disbursement. The FCFD account may be replenished in several ways. Amounts transferred from the FCFD to O&M and MILPERS appropriations may be returned when not needed to liquidate obligations because of subsequent favorable foreign currency rates in relation to the budget rate, or because other amounts have become available to cover obligations. A transfer back to the FCFD of unneeded amounts must be made before the end of the second fiscal year of expiration following the fiscal year of availability of the O&M or MILPERS appropriation to which the funds were originally transferred. Amounts may also be transferred to the FCFD account even if they did not originate there. Specifically, DOD may transfer to the FCFD account any unobligated O&M and MILPERS amounts unrelated to foreign currency exchange fluctuations so long as the transfers are made not later than the end of the second fiscal year of expiration of the appropriation. While multiple transfers of these unobligated amounts may be made during a fiscal year, any such transfer is limited so that the amount in the FCFD account does not exceed the statutory maximum of $970 million at the time of transfer. When the FCFD account balance is at the maximum balance, the services normally retain in their service-level O&M and MILPERS foreign currency fluctuation accounts any gains resulting from favorable foreign currency rates. Finally, any amounts transferred, whether from the FCFD account to an O&M or MILPERS account, or from an O&M or MILPERS account to the FCFD, are merged with the account and assume the characteristics of that account, including the period of availability of the funds contained in the account. Visibility of service-level foreign currency fluctuation account and FCFD transactions is maintained through the services’ accounting systems and execution reports. DOD uses the following reports to track its foreign currency funds: Foreign Currency Fluctuations, Defense Report (O&M): provides data on O&M foreign currency gains and losses for each service, by currency, including data on projected gains or losses for any remaining obligations that have not yet been liquidated and disbursed at the time of the report. Foreign Currency Fluctuation, Defense Report (MILPERS): provides data on MILPERS foreign currency gains and losses for each service, by currency, including data on projected gains or losses for any remaining obligations that have not yet been disbursed at the time of the report. GAO’s Prior Work on Management of Federal Funds In 2013, we analyzed and reported on carryover balances in federal accounts, which amounted to $2.2 trillion in fiscal year 2012, and we found that greater examination of carryover balances provides opportunities for enhanced oversight of agencies’ management of federal funds and may help identify potential budgetary savings. Carryover balances are composed of both obligated and unobligated amounts. Only accounts with multi-year or no-year amounts, such as the FCFD, may carry over amounts that remain legally available for new obligations from one fiscal year to the next. DOD’s carryover balances would include FCFD account balances carried from one year to the next.
DOD’s FCFD account is composed of unobligated carryover amounts that accumulate when unneeded for transfer to O&M and MILPERS accounts to cover foreign currency fluctuations. FCFD unobligated carryover balances include any expired, unobligated balances from the military services’ O&M and MILPERS accounts, which can include any gains due to favorable foreign currency fluctuations that are not used to cover other losses and that are transferred into the FCFD. DOD Revised Its Foreign Currency Budget Rates in Fiscal Years 2014 through 2016, Decreasing Its Projected Funding Needs and Potential Foreign Currency Gains and Losses DOD revised its foreign currency budget rates in fiscal years 2014 through 2016, which resulted in budget rates in these years that were more closely aligned with rates published by Treasury. Furthermore, the revised budget rates in fiscal years 2014 through 2016 decreased DOD’s projected O&M and MILPERS funding needs. The revised budget rates also decreased potential gains and losses in the amount of funds that DOD had available for its planned overseas expenditures. DOD Revised Its Foreign Currency Budget Rates in Fiscal Years 2014 through 2016, Resulting in Rates More Closely Aligned with Treasury Rates DOD revised its foreign currency budget rates in fiscal year 2014 and continued to do so in fiscal years 2015 and 2016 before making adjustments to its methodology in fiscal year 2017. According to an OUSD(C) official, the methodology developed in 2017 resulted in budget rates that were more closely aligned with market rates than in previous years, making revision of the 2017 budget rates unnecessary. DOD’s revisions to its foreign currency budget rates in fiscal years 2014 through 2016 resulted in rates that more closely aligned with those published by Treasury. Further, the revisions decreased the expected gains that would otherwise have resulted because the U.S. Dollar strengthened substantially against other foreign currencies in fiscal years 2014 through 2016 between the time the budget rates were set and the start of each fiscal year. Prior to fiscal year 2014, DOD did not revise its foreign currency budget rates. DOD officials did not provide an explanation for why the budget rates for fiscal years 2009 through 2013 were not revised. DOD developed, in November 2015, a set of standard operating procedures that describe the methodology it used for formulating budget rates for the nine foreign currencies included in its budget submission. These procedures also state that DOD is required to update the budget rates once an appropriation is enacted for the fiscal year. For example, if Congress reduces DOD’s appropriations due to favorable foreign currency rates, such as the $1.5 billion reduction in DOD’s total fiscal year 2016 appropriations, OUSD(C) then revises the budget rates to absorb the reduced funding levels. OUSD(C) officials stated that other factors are also considered when determining whether to revise the foreign currency budget rates, and that the department communicates the revised budget rates to the DOD components and Congress. For example, OUSD(C) assesses the value of each of the nine foreign currencies used to develop the budget request relative to the strength of the U.S. Dollar during the fiscal year.
An OUSD(C) official also noted that the effects that the rate changes would have across these foreign currencies are considered prior to submitting recommended rate revisions to OUSD(C) leadership for approval. The official stated that one currency may be experiencing a loss while another is experiencing a gain, which can affect whether to revise the rates and what those revisions should be. Additionally, the OUSD(C) official stated that "significant" projected gains or losses could drive a revision to the foreign currency budget rates, and that an informal $10 million threshold for projected gains and losses is used to determine when the foreign currency budget rates are revised. According to OUSD(C) officials, DOD components and Congress were notified when the budget rates were revised during fiscal years 2014 through 2016, including an explanation for why the rates were revised. OUSD(C) also includes the budget rates for each of the nine foreign currencies on its website and identifies any instances in which the budget rates were revised, along with the effective date of any rate revisions. Our analysis of DOD's use of revised budget rates during fiscal years 2014 through 2016 found that the revised budget rates for those years were more closely aligned with rates published by Treasury. More specifically, for the nine foreign currencies included in DOD's budget, our comparison of DOD's initial and revised budget rates for fiscal years 2009 through 2017 with average Treasury rates for these years found that DOD's budget rates differed from Treasury rates by less than 10 percent in about 64 percent of the total 162 occurrences we examined. While we are unaware of any criteria that suggest how closely DOD's foreign currency budget rates should align with market rates, we used 10 percent as a basis for our analysis because Treasury's guidance states that amendments to its published exchange rates are required if rates differ from current rates by 10 percent or more. We further examined these occurrences to determine the differences between the DOD and Treasury rates before and after DOD began revising its budget rates in fiscal year 2014. Of the 162 occurrences we reviewed, 90 were in our comparison for fiscal years 2009 through 2013, and 72 were in our comparison for fiscal years 2014 through 2017. Our analysis shows the following:

For fiscal years 2014 through 2017, DOD's budget rates for its nine foreign currencies differed from Treasury rates by less than 10 percent in about 71 percent of the occurrences, an increase from about 59 percent of the occurrences for fiscal years 2009 through 2013, before DOD began revising its rates after the fiscal year began.

For fiscal years 2014 through 2017, after DOD began revising its rates, DOD's budget rates differed from Treasury's rates by 10 percent or more in about 29 percent of the occurrences, a decrease from about 41 percent of the occurrences prior to fiscal year 2014.

Figure 2 below shows the number of occurrences in which DOD's initial and revised rates differed from Treasury rates by less than 10 percent, and the number in which DOD's rates differed from Treasury rates by 10 percent or more. The occurrences in which the difference is less than 10 percent are those most closely aligned with Treasury rates.
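The 10 percent comparison applied throughout this analysis is straightforward arithmetic. The following minimal sketch, written in Python, shows how a single occurrence would be classified against an average Treasury rate; the function name and the rates used are hypothetical illustrations, not values drawn from our data.

# Classify one occurrence: does a DOD budget rate fall within 10 percent
# of the corresponding average Treasury rate? Rates are hypothetical and
# expressed in foreign currency units per U.S. Dollar.
def within_10_percent(dod_rate: float, treasury_rate: float) -> bool:
    return abs(dod_rate - treasury_rate) / treasury_rate < 0.10

print(within_10_percent(0.92, 0.88))  # True: about a 4.5 percent difference
print(within_10_percent(1.00, 0.88))  # False: about a 13.6 percent difference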
According to DOD officials, the differences between DOD's foreign currency budget rates and Treasury rates are driven primarily by market volatility (that is, the differences between the foreign currency rates at the time DOD formulates its budget rates, prior to the fiscal year, and the foreign currency rates determined by Treasury when obligated amounts are liquidated through disbursements during the fiscal year). According to the OUSD(C) official responsible for formulating and revising the foreign currency budget rates, the lag between the time a budget rate is set (approximately 18 months prior to the beginning of a particular fiscal year) and the fiscal year itself is a major reason why the budget rate may need to be revised. According to the official, the market rates experienced during fiscal years 2014 through 2016 were substantially different from those expected when the budget rates for those years were developed. Therefore, DOD revised its budget rates during these years to more closely align them with the market rates experienced. Specifically, this official stated that DOD revised its budget rates during fiscal years 2014 through 2016 to decrease the expected gains that would have otherwise resulted during these fiscal years from a substantial increase in the strength of the U.S. Dollar relative to other foreign currencies from the time the budget rates were set as compared with more favorable rates available once the fiscal year began. In order to more closely align its budget rates with market rates, DOD introduced a new methodology to establish the foreign currency budget rates for fiscal year 2017 because DOD anticipated approximately $1 billion in projected gains if it used the prior methodology. As a result of this change in methodology, according to the OUSD(C) official, DOD did not experience substantial gains or losses in fiscal year 2017 and therefore did not revise its foreign currency budget rates during that year. However, as previously stated, the official did not provide an explanation as to why the budget rates for fiscal years 2009 through 2013 were not revised.

Revised Foreign Currency Budget Rates Decreased the Estimate of DOD's O&M and MILPERS Funding Needs and Potential Gains and Losses That Would Have Occurred Due to Foreign Currency Fluctuations

DOD's use of revised foreign currency budget rates decreased DOD's projected O&M and MILPERS funding needs and any potential gains and losses that would have occurred due to foreign currency fluctuations during fiscal years 2014 through 2016. Because DOD uses its budget rates to establish its projected annual O&M and MILPERS funding requirements for planned overseas expenditures, any revisions to the budget rates affect DOD's estimate of its funding needs. For example, our analysis shows that as a result of revising its budget rates during fiscal years 2014 through 2016, DOD's projected funding needs for the period of fiscal years 2009 through 2017 decreased from about $60.2 billion to about $57.5 billion, a decrease of about $2.7 billion. To further show the effect that changing foreign currency rates could have on DOD's projected funding for planned overseas expenditures for fiscal years 2009 through 2017, we also compared DOD's projected O&M and MILPERS funding needs, based on its initial and revised foreign currency budget rates, against projected funding needs based on the use of foreign currency rates published by Treasury during the fiscal year.
Our analysis shows that DOD's projected O&M and MILPERS foreign currency funding needs using Treasury rates would have been about $58.4 billion, or about $885 million more than the $57.5 billion that DOD had projected using its initial and revised budget rates. DOD also uses foreign currency budget rates to calculate gains or losses attributable to foreign currency fluctuations. Specifically, DOD determines gains and losses due to foreign currency fluctuations by comparing the budget rate (that is, the initial or revised budget rate) used to incur obligations against a more current market rate at the time it liquidates its obligations through disbursements (this computation is illustrated in the sketch at the end of this passage). Therefore, revisions to the budget rates not only change DOD's projected O&M and MILPERS funding requirements for the fiscal year in which the revisions occur, but also change the baseline from which the potential gains or losses would result when DOD liquidates its overseas O&M and MILPERS obligations through disbursements. For example, in fiscal year 2016, Congress reduced DOD's total appropriations by $1.5 billion. As a result of this reduction and favorable foreign currency rates, DOD revised its fiscal year 2016 budget rates in February 2016 and applied the revised foreign currency budget rates in its calculations of gains and losses due to foreign currency fluctuations since the beginning of the fiscal year to absorb the reduced funding level. In applying the revised budget rates, a $30 million gain DOD had previously projected became a projected loss of about $186.2 million. The use of revised budget rates also affects the movement of funds from the FCFD account. For example, if the use of the revised budget rate creates a loss and DOD is unable to cover the increased costs to its O&M or MILPERS appropriations, funds from the FCFD account may be used to cover its planned overseas expenditures.

DOD Has Taken Some Steps to Reduce Costs, but Has Not Fully Explored Additional Opportunities to Achieve Savings When Selecting Foreign Currency Rates

DOD has taken some steps to reduce costs in selecting foreign currency rates to liquidate its obligations through disbursements. However, DOD organizations are not always selecting the most cost-effective rates to convert U.S. Dollars, and DOD has not determined whether opportunities exist to achieve additional efficiencies when making disbursements. DOD liquidates its obligations through disbursements for overseas expenditures using Treasury's ITS.gov system, which provides DOD organizations with a choice of foreign currency rates to apply when making disbursements in a foreign currency. The foreign currency rate chosen determines how many U.S. Dollars must be paid for the transaction. Treasury officials explained that customers may choose either the spot rate or an advanced rate. The spot rate is the price for foreign currencies for delivery in 2 business days. Treasury officials explained that advanced rates are exchange rates that are "locked in" and guaranteed by the bank processing the disbursement 5, 4, or 3 days in advance of payment processing; the payment processing date is known as the "value date" of a disbursement. Normally, the cost of the rate increases the further from the date of disbursement that it is locked in. While DOD often uses a 5-day advanced rate to make its disbursements, the other available rate options, such as the 3-day advanced and spot rates, can be more cost-effective.
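Before turning to the rates DOD selects at disbursement, the gain-and-loss computation described above can be made concrete. The sketch below is illustrative only: the obligation amount and rates are hypothetical, and rates are expressed in foreign currency units per U.S. Dollar, consistent with the euro figures cited later in this section.

# Gain or loss on a foreign currency obligation, computed as described
# above: the dollars budgeted at the budget rate are compared with the
# dollars required at the market rate when the obligation is liquidated
# through disbursement. All figures are hypothetical.
def fx_gain_or_loss(fc_obligation: float, budget_rate: float,
                    market_rate: float) -> float:
    """Positive values are gains in buying power; negative values are losses."""
    usd_budgeted = fc_obligation / budget_rate
    usd_required = fc_obligation / market_rate
    return usd_budgeted - usd_required

# A 10 million euro obligation budgeted at 0.90 EUR per USD:
print(fx_gain_or_loss(10_000_000, 0.90, 0.95))  # about +584,795 (gain)
print(fx_gain_or_loss(10_000_000, 0.90, 0.85))  # about -653,595 (loss)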
We analyzed data provided by Treasury from its ITS.gov system and found that for disbursements made during June and July 2017, the 5-day advanced rate was more costly than the 3-day advanced rate. In instances where the spot rate was available, we found that it was also more cost-effective than either the 3-day or 5-day advanced rates. For example, for those transactions processed through ITS.gov on June 13, 2017, DOD would have paid 1 U.S. Dollar for .881 European Euros if using the 5-day advanced rate; .883 European Euros if using the 3-day advanced rate; and .889 European Euros if using the spot rate. In the case of the Army, an Army Financial Management Command official provided us information indicating that the service has estimated the potential cost savings that would result from more consistently selecting 3-day advanced rates through the ITS.gov system to make overseas disbursements, rather than the 5-day advanced rate. More specifically, the Army estimated between $8 million and $10 million in annual savings by transitioning from a 5-day to a 3-day advanced rate when selecting foreign currency rates. According to officials, the Army has transitioned all paying locations to the 3-day advanced rate. The Army estimates that these locations have produced $6.04 million in savings through February 2018. Although the Army indicated that it also planned to analyze whether use of the spot rate was feasible, it had not done so at the time of our review. Data provided to us by Treasury from its ITS.gov system indicate that in June and July of 2017, the Air Force used the 5-day advanced rate exclusively for its disbursements, while the Navy and Marine Corps relied on both the 5-day and the 3-day advanced rates. Our analysis of these data shows that the Air Force would have achieved total savings for those 2 months of about $258,000 if it had made its disbursements using the 3-day rather than the 5-day advanced rate. The savings resulting from each transaction varied based on the amount of the transaction. For example, on June 13, 2017, the Air Force disbursed a payment exceeding $3.7 million and would have saved more than $9,000 for that transaction if the 3-day advanced rate had been used. For the same single transaction, if the spot rate had been used instead of the 5-day advanced rate, the Air Force would have saved more than $31,000. The Navy's and Marine Corps' disbursements for the same 2-month period showed less dramatic potential savings of less than $100, because the Navy and Marine Corps used the 3-day advanced rate rather than the 5-day advanced rate for most of their disbursements. Where information on the spot rate was available, its use, as opposed to either the 5-day or 3-day advanced rate, would have resulted in additional savings opportunities for those 2 months. While these examples are illustrative of cost savings opportunities in June and July 2017, Treasury data show that in fiscal year 2016, DOD disbursed more than $11.8 billion through ITS.gov and, as of July 2017, had disbursed more than $9.6 billion through ITS.gov. Our analysis suggests that DOD could achieve further cost savings by more consistently selecting cost-effective foreign currency rates, such as the 3-day advanced or spot rates, with which to make disbursements.
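The arithmetic behind these savings estimates can be reproduced from the June 13, 2017, euro rates cited above. The sketch below does so; because the published rates are rounded to three decimals and the Air Force payment only exceeded $3.7 million, the results approximate rather than match the dollar figures reported in the text.

# Approximate savings from selecting a more favorable rate for a single
# disbursement, using the June 13, 2017, euro rates cited above
# (EUR per USD, rounded to three decimals).
rates = {"5-day": 0.881, "3-day": 0.883, "spot": 0.889}

usd_paid_at_5day = 3_700_000                  # Air Force example payment
eur_owed = usd_paid_at_5day * rates["5-day"]  # euros actually delivered

for label in ("3-day", "spot"):
    usd_needed = eur_owed / rates[label]
    print(f"{label}: about ${usd_paid_at_5day - usd_needed:,.0f} saved")
# Prints roughly $8,400 saved at the 3-day rate and $33,300 at the spot rate.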
In selecting foreign currency rates, DOD's Financial Management Regulation states that disbursements should be computed to avoid gains or deficiencies (losses) due to fluctuations in rates of exchange to the greatest extent possible. If there is no rate of exchange established by agreement between the U.S. government and the foreign country, then foreign currency transactions are to be conducted at the prevailing rate. The prevailing rate of exchange is the most favorable rate legally available for the acquisition of foreign currency for official disbursement and other exchange transactions. Additionally, GAO's Standards for Internal Control in the Federal Government calls for management to periodically review policies, procedures, and related control activities for continued relevance and effectiveness in achieving the entity's objectives or addressing related risks. DOD disbursement organizations have flexibility in selecting the foreign currency rates to use when making disbursements using ITS.gov. There is no DOD-wide requirement for the services to review the rates used to make disbursements and, except for the Army, the services have not conducted such a review. This step is necessary to determine whether there are opportunities for savings by more consistently selecting cost-effective foreign currency rates. We discussed disbursement processes with DOD and Air Force, Navy, and Marine Corps financial management officials, including the factors considered when selecting foreign currency rates. In addition, a Defense Finance and Accounting Service official noted that currencies can have criteria specifying when a payment is made and provided us the ITS.gov user's guide, which addresses "special currency requirements," such as those that would drive advanced payment for a currency. For example, the user's guide indicates that payment for transactions involving the Afghanistan Afghani must be made 2 days in advance of the value date, and cannot be made on a Friday. However, information contained in the ITS.gov user's guide and information we received from a Treasury official indicate that none of the nine foreign currencies for which DOD budgets is subject to restrictions on when payment must be made; therefore, this consideration should not drive the use of a specific rate at disbursement. Marine Corps financial management officials told us that the foreign currency rate selected at disbursement is at the discretion of the disbursing officer based on operational requirements, with the understanding that the most favorable rate for the government is the preference, while balancing mission requirements and the time necessary to process the transaction. These officials acknowledged that the 3-day advanced rate can be more cost-effective for the government but indicated that there are occasions when the 5-day advanced rate should be used because it provides more time to process payments from deployed locations operating in different time zones or with limited communication capabilities. However, we found that OUSD(C) officials and financial management officials with the headquarters of the Air Force, Navy, and Marine Corps were not involved in disbursement, were unaware of what rates were being used at disbursement, and had not reviewed the rationale for selecting one rate over another. For example, Air Force and Navy headquarters officials we spoke with were unable to provide insight as to what drives the decision to use one rate over another.
One Navy financial management official told us that he was unaware of any Navy policy that directs a specific rate to be used when disbursing funds, and suggested that the absence of such a policy provides the flexibility for officials to determine which approach is best. Headquarters, Marine Corps officials also stated that they did not monitor foreign currency rates used for disbursements or the reasons why one rate was selected over another. Based on our inquiry, officials indicated that they would analyze the foreign currency rates used for disbursements in 2017 and whether opportunities existed to achieve savings by using other rates available through ITS.gov. A Marine Corps official subsequently provided us with information showing that two of the three disbursing offices that currently utilize ITS.gov for disbursements use the 3-day advanced rate exclusively, while one uses the 5-day advanced rate. The official noted that a technical issue within ITS.gov has restricted the disbursing office currently using the 5-day advanced rate from choosing any other rate, but that the service was further assessing options to correct the issue. In our conversations with an official in OUSD(C) about why the other services had not reviewed the foreign currency rates used for disbursements to determine what was being paid through ITS.gov and whether there was an opportunity for savings, the official commented that OUSD(C) had not directed the services to conduct any reviews in this area. This official was unaware that different foreign currency rates were used to make disbursements, and assumed that the military services all make disbursements in the same way. However, as discussed above, the services are using different rates, resulting in inconsistency across the department. The official further indicated that DOD could perform a review to determine the cost differences of using one disbursement rate over another. Absent a review of the rates the services are using in making disbursements and whether cost savings could be achieved by more consistently selecting the most cost-effective foreign currency rates available for use at disbursement, DOD is at risk of paying more to convert U.S. Dollars for overseas expenditures than would otherwise be required.

DOD Has Used the FCFD Account to Cover Losses Due to Foreign Currency Fluctuations, but Does Not Manage the Account Balance Based on Projected Losses or Quality Data

In fiscal years 2009 through 2016, DOD used the FCFD account to cover losses that the services experienced due to foreign currency fluctuations in 6 of the 8 years we reviewed. However, DOD does not effectively manage the FCFD account balance based on projected gains or losses. Transfers of expired unobligated balances from MILPERS and O&M accounts into the FCFD account have been made to replenish the account balance to the statutory limit of $970 million, without consideration of projected losses due to foreign currency fluctuations. Furthermore, DOD's financial reporting on foreign currency fluctuations for fiscal years 2009 through 2016 contains incomplete and inaccurate information.

DOD Has Used the FCFD Account to Cover Losses and Has Maintained the Account at the Maximum Level since Fiscal Year 2012

In fiscal years 2009 through 2016, DOD transferred approximately $1.92 billion out of the FCFD account to cover losses that the services experienced due to foreign currency fluctuations in 6 of the 8 years we reviewed.
For these years, DOD transferred funds from the FCFD account to the services' MILPERS and O&M accounts during the fiscal year in which the funds were obligated for overseas expenses. The transfer amounts were based on both losses realized from actual disbursements and projected losses for any remaining obligations to be liquidated. The projected losses were calculated based on the foreign currency market rates current at the time of the calculation. Based on the service-level data we reviewed, all of the services reported that they experienced losses in at least 5 of the fiscal years we reviewed. For example, the Army reported that it experienced losses in its MILPERS account in 5 of the 8 years, while the Marine Corps reported that it experienced losses in its O&M and MILPERS accounts in each of the 8 years. In addition to the transfers to cover losses within the services' MILPERS and O&M accounts, in fiscal year 2013 DOD transferred an additional $969 million to the Defense Working Capital Fund to offset fuel cost losses. Since fiscal year 2012, DOD has maintained the FCFD end-of-year account balance at $970 million, the maximum allowed by statute. To replenish the funds that were transferred out of the FCFD account, DOD transferred unobligated balances to the FCFD account from the services' O&M and MILPERS accounts. While DOD can also replenish the FCFD account or absorb foreign currency losses in certain currencies by transferring to the FCFD account any gains experienced by the services, our analysis found that DOD did not transfer any gains into the FCFD account for fiscal years 2009 through 2016. Figure 3 shows the transfers into and out of the FCFD account and the end-of-year FCFD account balance for fiscal years 2009 through 2016. Our analysis also shows that DOD has transferred funds to maintain the FCFD account at its maximum balance since 2012, despite experiencing fewer losses due to foreign currency fluctuations than it had experienced in fiscal years 2009 to 2011. Of the $1.92 billion transferred from the FCFD account to the services' MILPERS and O&M accounts to cover losses, $464.5 million has been transferred since fiscal year 2012, when DOD began maintaining its FCFD account at the maximum level. During that time, some of the services experienced foreign currency gains, while others experienced losses. For example, at the end of fiscal year 2013 the Navy reported a total realized and projected cumulative gain for its O&M and MILPERS accounts of about $98.6 million. In that same year, the Marine Corps reported a cumulative realized and projected loss for its O&M and MILPERS accounts of approximately $12.7 million. Had DOD not transferred unobligated funds back into the FCFD account, it would have retained a positive balance of approximately $505.5 million. However, DOD maintained the account balance at $970 million by transferring approximately $495.3 million in unobligated balances into the account.

DOD Analyzes Projected Losses to Inform Transfers out of the FCFD Account, but Does Not Consider Similar Information When Making Transfers into the FCFD Account

As part of its management of the FCFD account balance, DOD analyzes data on realized and projected losses as the basis for transferring funds from the FCFD account to the services' MILPERS and O&M accounts to cover losses. However, DOD does not consider projected losses when making transfers of unobligated O&M and MILPERS balances into the FCFD account.
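The replenishment transfers discussed above are bounded by the statutory maximum described earlier in this report: a transfer of unobligated balances may not push the FCFD account above $970 million at the time of transfer. The sketch below expresses that rule in Python; the function name and the example balances are hypothetical and illustrative only.

# Largest unobligated amount that may be transferred into the FCFD
# account without exceeding the $970 million statutory maximum at the
# time of transfer. Example balances are hypothetical.
FCFD_CAP = 970_000_000

def max_transfer_in(current_balance: int, unobligated_available: int) -> int:
    headroom = max(FCFD_CAP - current_balance, 0)
    return min(headroom, unobligated_available)

# An account drawn down to $600 million leaves $370 million of headroom,
# even if more unobligated balances are available:
print(max_transfer_in(600_000_000, 500_000_000))  # 370000000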
Figure 4 below shows the FCFD account balance that DOD has maintained in relation to the transfers out of the account to cover losses. According to the OUSD(C) official responsible for managing the FCFD account, DOD maintains the FCFD account balance at $970 million to maximize unobligated balances within the military services' O&M and MILPERS accounts before they are canceled and are no longer available to DOD. In addition, this official stated that DOD prefers to maintain the maximum balance in case it is needed due to sudden, unfavorable swings in foreign currency exchange rates. Our review of the documentation used to make transfers into and out of the FCFD account corroborates that DOD maintains the FCFD account balance to maximize the retention of unobligated balances. Specifically, we found instances in which the documentation states that the transfers of unobligated balances into the FCFD account were made for the purpose of replenishing the account balance to the statutory limit. For example, DOD transferred $89 million from the FCFD account to the Army for losses it had realized and projected in fiscal year 2014, and later transferred unobligated balances of the same amount back into the account. DOD's documentation states that this transfer of unobligated balances was made for the purpose of replenishing the account to $970 million in order to finance estimated foreign currency losses resulting from the decline in value of the U.S. Dollar. However, the transfer to the Army already covered the realized losses and projected losses for any remaining disbursements. In other words, estimated foreign currency losses had already been accounted for at the time of the transfer to the Army. In addition, based on data reported by the Air Force, Marine Corps, and Navy, DOD had an estimated cumulative gain of about $30 million for fiscal year 2014 based on these services' gains and losses, which could have been transferred to the FCFD account to absorb any additional foreign currency losses elsewhere. However, DOD did not transfer those gains to the FCFD account. Similarly, based on data reported by these services, DOD experienced cumulative realized and projected gains of more than $200 million in fiscal year 2013 and about $92.6 million in fiscal year 2015, but it did not transfer any gains to the FCFD account because the account balance had already reached its maximum through transferred unobligated balances. Despite DOD's replenishing the account balance to the maximum amount for the purpose of covering additional losses, FCFD transfers have not fully offset losses in some years, further raising questions about the need to maintain the balance at the statutory cap of $970 million annually. Specifically, in 3 of the 6 years in which DOD transferred funds from the FCFD account to the services' MILPERS and O&M accounts, DOD did not use the FCFD account to fully cover the losses that the Air Force, Marine Corps, and Navy experienced. In fiscal year 2011, for example, DOD's transfers out of the FCFD account to these services covered about 88 percent of the reported MILPERS and O&M losses that these services had realized and projected to lose by the end of the fiscal year. In fiscal year 2012, FCFD transfers covered almost 72 percent of the MILPERS and O&M realized and projected losses reported by the Air Force, Marine Corps, and Navy, as of the end of the fiscal year.
In fiscal year 2016, DOD's FCFD transfers to these services covered approximately 55 percent of their reported MILPERS and O&M realized and projected losses as of the end of the fiscal year. The OUSD(C) official we spoke with stated that FCFD transfers to cover losses begin with a request from the services, and the OUSD(C) office and the services then coordinate on the final transfer amount. In addition, some service officials told us that they try to cover their losses using each service's available funding before reaching out for assistance from the FCFD account. Therefore, based on a service's ability to cover the loss, it may not always request an FCFD transfer to cover the full amount of realized and projected losses. Further, according to an OUSD(C) official, the timing of a service's request for an FCFD transfer may also affect any differences between the amount transferred and the actual losses experienced. Specifically, if a service requests a transfer early in the fiscal year based on realized and projected losses, actual losses experienced as of the end of the fiscal year may be greater or less than the transfer amount due to foreign currency fluctuations. Using transfers of unobligated balances, DOD has maintained its FCFD account balance at the maximum level allowed by statute because it has not analyzed realized and projected losses to determine what account balance is necessary to meet the intended purpose of the account. In our prior work, we developed key questions for evaluating federal account balances that agencies may use to identify the balance necessary to maintain agency or program operations. Greater examination of carryover balances can enhance oversight of agencies' management of federal funds. Specifically, we reported that understanding an agency's processes for estimating and managing carryover balances provides information to assess how effectively agencies anticipate program needs and ensure the most efficient use of resources. To estimate and manage carryover balances, agencies may consider such factors as the future needs of the account, economic indicators, and historical data. If an agency does not have a robust strategy in place to manage carryover balances or is unable to adequately explain or support the reported carryover balance, then a more in-depth review is warranted. In those cases, balances may either fall too low to efficiently manage operations or rise to unnecessarily high levels, creating potential opportunities for those funds to be used more efficiently elsewhere. When asked about maintaining the balance at a level necessary to cover losses, rather than at the maximum level allowed by statute, the OUSD(C) official indicated that OUSD(C) takes a cautious approach and prefers to have the additional flexibility allowed by the higher balance. Further, the official stated that it would be difficult for DOD to attempt to base its unobligated balance transfers and the FCFD account balance on analysis and evaluation, given the unpredictable nature and constant volatility of foreign currency rates. Our guidelines on evaluating carryover balances acknowledge that external events beyond an agency's control can dramatically affect carryover balances. However, the challenges inherent in predicting foreign currency rates do not preclude DOD from conducting analysis to gain insight into the appropriate size of the account balance and what potential opportunities for savings might exist.
Specifically, our guidelines suggest that agencies would benefit from considering the sources and fiscal characteristics of an account with carryover balances. In this case, the FCFD account can receive funds from transfers of unobligated balances and realized foreign currency gains. In addition, DOD can make multiple transfers throughout a fiscal year and can transfer funds between the FCFD and the services' O&M and MILPERS accounts simultaneously, if necessary. These characteristics of the FCFD account already provide the department with flexibility, indicating that DOD may be positioned to manage the FCFD balance in a more analytical manner based on any projected losses. Without analyzing realized or projected losses to determine what balance may be needed to meet the FCFD account's intended purpose, the account balance may be kept at a higher level than is necessary. As a result, although an exact amount is unknown, DOD may be maintaining balances in the FCFD account that are hundreds of millions of dollars higher than needed to cover any losses it has experienced, and these funds might have been used more efficiently to support other defense activities or returned to Treasury after the account is canceled by law.

DOD Lacks Quality Data to Support Management of the FCFD Account

DOD prepares financial reports to monitor the status of its foreign currency funds, but some of DOD's financial reporting on foreign currency fluctuations for fiscal years 2009 through 2016 is incomplete and inaccurate. DOD's Financial Management Regulation establishes reporting requirements specifically for tracking all transactions that increase or decrease the FCFD. In accordance with that guidance, the services provide data from their accounting systems to the Defense Finance and Accounting Service to generate reports that the services and OUSD(C) use as a tool to monitor how they are expending funds appropriated for overseas expenditures. For O&M appropriations, the Foreign Currency Fluctuations, Defense (O&M) report provides data on foreign currency gains and losses for each service, by currency, including data on projected gains or losses for any remaining obligations that have not yet been disbursed at the time of the report. The Foreign Currency Fluctuations, Defense (MILPERS) report provides similar information for the MILPERS appropriation. We reviewed end-of-year Foreign Currency Fluctuations, Defense (O&M) and (MILPERS) reports for fiscal years 2009 through 2016 and found that some of the reporting for O&M was incomplete and inaccurate, which hampers the quality of information available to manage the FCFD account. For instance, we found the following:

Incomplete data in the Foreign Currency Fluctuations, Defense (O&M) reports: In our review of the end-of-year Foreign Currency Fluctuations, Defense (O&M) and (MILPERS) reports, we observed several instances of incomplete data in the O&M reports, and these affect managers' ability to make sound decisions to manage foreign currency gains and losses. First, for the Navy, we found that the report data showed, for multiple currencies across fiscal years 2011 through 2016, values in the realized variance column, indicating that the service had experienced a gain or loss in a particular currency; however, the reports showed values of zero in other columns that are necessary for calculating the gain or loss.
Second, the Air Force data for the Turkey Lira in fiscal year 2012 showed a gain or loss without any data indicating what would have driven it. Third, in one instance, Marine Corps data on obligations for fiscal year 2011 were missing from the end-of-year reports until 2014. Missing obligation data in these end-of-year reports indicate a limitation in using the reports to track actual gains and losses.

Inaccurate data in the Army's Foreign Currency Fluctuations, Defense (O&M) reports: The Army's Foreign Currency Fluctuations, Defense (O&M) reports are inaccurate and cannot be used to reliably track gains or losses, and this hinders managers from making sound decisions regarding the Army's foreign currency gains and losses. The reports are inaccurate in that the Army's accounting system charges disbursements to the current fiscal year appropriation rather than to the fiscal year appropriation that incurred the obligation, as required by the Financial Management Regulation. According to officials from the Army Budget Office, the Army designed its General Fund Enterprise Business System (GFEBS) to record disbursements to the current fiscal year based on differing interpretations of a previous version of the Regulation. Because the Army is not recording its disbursements to the fiscal year appropriation that incurred the obligation, as the other services are, Army data are inaccurate and cannot be used by the OUSD(C) official responsible for overseeing DOD's foreign currency program to track the Army's foreign currency transactions and maintain full visibility of DOD's overall gains and losses in a given fiscal year. Army Budget Office officials acknowledged that the Army will need to modify its system to record disbursements consistent with Financial Management Regulation guidance, but the Army has not developed a plan or timeline for doing so. Without accurate reporting of the Army's foreign currency transactions, DOD lacks information for tracking and helping to manage the Army's foreign currency gains and losses. DOD's Financial Management Regulation specifies the data that must be included in the Foreign Currency Fluctuations, Defense (O&M) and (MILPERS) reports and the roles and responsibilities of the services, as well as the Defense Finance and Accounting Service, for ensuring the quality of those data. However, we identified data issues in our analysis that indicate that quality is inconsistent. For example, officials from the Navy stated that they had observed the incomplete data for some currencies and speculated that the incompleteness was attributable to data entry errors. Similarly, according to an OUSD(C) official, the Defense Finance and Accounting Service is notified when discrepancies are found in the reports, and Defense Finance and Accounting Service officials coordinate with the services to correct the data. However, neither Navy nor Defense Finance and Accounting Service officials have corrected the data. Although DOD's Financial Management Regulation specifies the data that are to be included, as well as the roles and responsibilities of the services and the Defense Finance and Accounting Service, it does not identify who is responsible for correcting erroneous or missing data.
According to an OUSD(C) official, correcting reporting issues is an area in which OUSD(C), the Defense Finance and Accounting Service, and the services can improve, and they would benefit from guidance in the Financial Management Regulation that establishes the steps to be taken for making such corrections. Further, GAO's Standards for Internal Control in the Federal Government and the Federal Accounting Standards Advisory Board's Handbook of Federal Accounting Standards and Other Pronouncements, as Amended, both establish the importance of using reliable and complete information for making decisions. In addition, DOD's Financial Management Regulation establishes responsibilities for both the DOD components and the Defense Finance and Accounting Service to establish appropriate internal controls to ensure that financial reporting data are complete, accurate, and supportable, in order for managers to make sound decisions and exercise proper stewardship over these resources. Effectively managing foreign currency gains and losses, as well as any projected gains or losses for remaining obligations that have not yet been liquidated through disbursement, requires complete and accurate data. OUSD(C) and service officials recognize the importance of reliable data, as well as the need to take steps to improve the quality of the foreign currency gains and losses data. Without OUSD(C) establishing guidance to ensure that the Foreign Currency Fluctuations, Defense (O&M) report data that track foreign currency gains and losses are complete, DOD and Congress do not have the information needed to make sound decisions and exercise proper stewardship over resources affected by foreign currency fluctuations. Furthermore, until the Army establishes a plan and timeline for modifying its system to record foreign currency disbursements accurately, the Army and DOD will lack quality information for tracking and helping to manage the Army's and DOD's foreign currency gains and losses.

Conclusions

Congress provides DOD with a significant amount of funding each year to purchase goods and services overseas and to pay service-members stationed abroad. DOD develops and can revise foreign currency budget rates to determine its funding needs and calculate any gains or losses that result from DOD's overseas expenditures. The Army has estimated the potential cost savings that would result from more consistently selecting a more cost-effective foreign currency rate for making disbursements to liquidate its overseas O&M obligations. However, DOD has not fully determined whether additional cost-saving opportunities exist because the services have not reviewed the rates used for foreign currency disbursements. Absent a review of the foreign currency rates the services are using at disbursement, including whether cost-saving opportunities exist through more consistently selecting cost-effective foreign currency rates, DOD risks paying more than would otherwise be required. Further, while DOD has used the FCFD account to cover losses that resulted from foreign currency fluctuations, it has not managed the FCFD account balance by basing the transfers of unobligated balances into the FCFD account on an analysis of realized and projected losses.
Without basing its FCFD account balance on such analyses, DOD may be maintaining balances in the FCFD account that are hundreds of millions of dollars higher than needed to cover any losses it has experienced, and these amounts might have been used more efficiently to support other defense activities or ultimately returned to Treasury, once expired. Moreover, DOD has not established guidance and other procedures to ensure that complete and accurate data are included in financial reporting on foreign currency funds, and this limits the quality of information available to effectively manage the FCFD account.

Recommendations for Executive Action

We are making the following four recommendations to DOD. The Secretary of Defense should ensure that the following actions are taken:

The Under Secretary of Defense (Comptroller), in coordination with the U.S. Army, Air Force, Navy, and Marine Corps, should conduct a review of the foreign currency rates used at disbursement to determine whether cost-saving opportunities exist by more consistently selecting cost-effective rates at disbursement. (Recommendation 1)

The Under Secretary of Defense (Comptroller) should analyze realized and projected losses to determine the necessary size of the FCFD account balance and use the results of this analysis as the basis for transfers of unobligated balances to the account. (Recommendation 2)

The Under Secretary of Defense (Comptroller) should revise the Financial Management Regulation to include guidance on ensuring that data are complete and accurate, including assignment of responsibility for correcting erroneous data in its Foreign Currency Fluctuations, Defense (O&M) reports. (Recommendation 3)

The Secretary of the Army should develop a plan with timelines for implementing changes to its General Fund Enterprise Business System to accurately record its disbursements, consistent with DOD Financial Management Regulation guidance. (Recommendation 4)

Agency Comments and Our Evaluation

We provided a draft of this report to DOD for review and comment. In its written comments, reproduced in appendix II, DOD concurred with our first, third, and fourth recommendations and outlined its plan to address them. DOD partially concurred with our second recommendation that the Under Secretary of Defense (Comptroller) analyze realized and projected losses to determine the necessary size of the FCFD account balance and use the results of the analysis as the basis for transfers of unobligated balances to the account. DOD also provided technical comments, which we incorporated in the report where appropriate. In partially concurring with our second recommendation, DOD stated that projecting foreign currency gains or losses can be difficult given that foreign currency rates can be volatile due to various factors, such as trade balances, money supply, and national income, as well as arbitrary disturbances that affect foreign currency rates and cannot be predicted or forecasted, such as the departure of the United Kingdom from the European Union. DOD noted that because of the risk and volatility associated with foreign currency rates, Congress established the FCFD account. We agree that forecasting foreign currency rates is challenging due to market volatility and include examples in our report of the effect of foreign currency rate fluctuations on DOD's planned foreign currency obligations. Our report also describes the relationship between gains and losses and foreign currency fluctuations, and the movement of funds from the FCFD account to offset any losses.
As our report also discusses, DOD calculates actual and projected losses due to foreign currency fluctuations and uses those projections as the basis, at least in part, for any transfers out of the FCFD account to cover losses experienced in the military services' O&M and MILPERS appropriations. However, our report also notes that DOD does not consider its calculations of actual and future projected losses when making transfers of unobligated O&M and MILPERS balances to replenish the FCFD account. Instead, since fiscal year 2012, DOD has kept the FCFD account balance at the maximum level allowed by statute by using unobligated balances before they are canceled and are no longer available to DOD, regardless of whether the funds were needed in the account to offset any projected losses. DOD's comments also stated that projecting gains or losses for foreign currency to determine the size of the FCFD account opens the door to greater uncertainty and risk at a time when the department is working to rebuild readiness and implement the National Defense Strategy. Our report describes the characteristics of the FCFD account that provide DOD with flexibility to manage market volatility, thereby helping to address uncertainty and reduce risk. For example, DOD can make multiple transfers of funds to the FCFD account throughout a fiscal year in response to unforeseen foreign currency fluctuations. The FCFD account can also receive funds from transfers of actual foreign currency gains and/or unobligated balances. As we also noted, DOD made use of its authority to transfer expired unobligated MILPERS and O&M amounts into the FCFD account in the event that actual losses exceeded the projected amounts and additional transfers were deemed necessary. We continue to believe that by analyzing actual and projected losses and basing the transfer of any unobligated balances on these losses, DOD would be better positioned to determine the size of the FCFD account balance that is necessary to meet its intended purpose. Further, such analyses would provide opportunities to more efficiently use unobligated balances for other defense activities or return the balances to Treasury. We are sending copies of this report to the Secretary of Defense, the Under Secretary of Defense (Comptroller), the Secretary of the Army, the Secretary of the Navy, the Secretary of the Air Force, the Commandant of the Marine Corps, and appropriate congressional committees. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-5431 or russellc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III.

Appendix I: Scope and Methodology

To describe the Department of Defense's (DOD) revised foreign currency budget rates since 2009 and the relationship between the revised budget rates and DOD's projected Operation and Maintenance (O&M) and Military Personnel (MILPERS) funding needs, we reviewed DOD's foreign currency budget rates for the period of fiscal years 2009 through 2017, and we identified any years during which DOD revised the initial budget rates. We compared DOD's initial foreign currency budget rates and revised foreign currency budget rates with rates published by the U.S. Treasury Department (Treasury) for fiscal years 2009 through 2017.
This period corresponded with the data available to us on DOD's initial and revised rates and allowed for use of the most current data available, since DOD had not yet decided whether to revise the fiscal year 2018 budget rates while we were conducting our audit work. We chose rates published by Treasury for this comparison because Treasury has the sole authority to establish for all foreign currencies or credits the exchange rates at which such currencies are to be reported by all agencies of the government. Because Treasury rates are issued quarterly, we averaged Treasury's first and second quarter rates for each currency and compared the Treasury average with DOD's initial budget rates. Similarly, we computed an average of the third and fourth quarter Treasury rates for each currency and compared them with the DOD initial or revised budget rates, where applicable. These comparisons are meant to show the difference between DOD's budget rates and Treasury rates for the first 6 months of the fiscal year, and the difference between DOD's revised exchange rates and Treasury rates for the last 6 months of the fiscal year. Further, we analyzed the extent to which DOD's budget rates were within 10 percent of Treasury rates during these same years. We chose 10 percent as the basis for our analysis because Treasury's guidance states that amendments to the quarterly rates will be published during the quarter to reflect significant changes in the quarterly data, such as rate changes of 10 percent or more. Additionally, to understand the effect that revising the budget rates had on DOD's O&M and MILPERS funding estimates and on potential gains or losses due to foreign currency fluctuations, we used a three-step approach, illustrated in the sketch below. First, we identified the amount of O&M and MILPERS funds DOD requested for each currency and converted the U.S. Dollars requested to the total amount of foreign currency needed by multiplying the U.S. Dollars requested by DOD's initial budget rate. Second, we determined the total amount of U.S. Dollars required using the revised rates by dividing the total amount of foreign currency needed using DOD's initial budget rate by DOD's revised budget rate; we used this same approach to determine the total amount of U.S. Dollars required using the average Treasury rates. Third, we computed the differences in DOD's O&M and MILPERS foreign currency funding needs by subtracting the U.S. Dollars required to meet its foreign currency needs based on the average Treasury rates from the amounts required based on DOD's initial budget rates and DOD's revised budget rates, respectively. We discussed further with officials from the Office of the Under Secretary of Defense, Comptroller (OUSD(C)) the factors considered in revising the rates and whether those factors are communicated within and outside of the department.
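The three-step approach described above reduces to two conversions and a subtraction. The following sketch restates it in Python with hypothetical amounts and rates (expressed in foreign currency units per U.S. Dollar); it illustrates the method only and does not reproduce our actual computations.

# The three-step approach described above, with hypothetical figures.
usd_requested = 1_000_000_000  # Step 1 input: O&M/MILPERS dollars for a currency
initial_rate = 0.90            # DOD initial budget rate
revised_rate = 0.95            # DOD revised budget rate
treasury_rate = 0.93           # average Treasury rate

# Step 1: foreign currency needed at the initial budget rate.
fc_needed = usd_requested * initial_rate

# Step 2: U.S. Dollars required for that foreign currency at other rates.
usd_at_revised = fc_needed / revised_rate
usd_at_treasury = fc_needed / treasury_rate

# Step 3: differences in funding needs relative to the Treasury-based figure.
print(usd_requested - usd_at_treasury)   # initial budget rate vs. Treasury
print(usd_at_revised - usd_at_treasury)  # revised budget rate vs. Treasury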
To evaluate the extent to which DOD has taken steps to reduce costs in selecting foreign currency rates at which to make disbursements and to determine whether opportunities exist to gain additional savings, we reviewed accounting standards and guidelines regarding disbursements and calculations of foreign currency gains and losses, such as DOD's Financial Management Regulation 7000.14-R, which calls for the use of prevailing foreign currency rates to make disbursements. We also discussed with agency officials how those guidelines are being carried out, and whether DOD or the services have developed guidance that instructs the services in selecting rates used for disbursements in foreign currencies. Additionally, we examined a non-generalizable selection of data for DOD disbursements made during June and July 2017 from Treasury's International Treasury Service (ITS.gov) system to determine which rates DOD used during this period and what savings might be achievable from using alternate rates. We chose data from those 2 months because they were the most recent disbursement data available at the time Treasury provided data for our review. Additionally, we discussed with officials from OUSD(C) and the services any analysis and ongoing efforts to transition to more cost-effective rates, including savings that may result. To assess the extent to which DOD has effectively managed the Foreign Currency Fluctuations, Defense (FCFD) account to cover losses, and maintained quality information to manage these funds, we analyzed DOD data for fiscal years 2009 through 2016 on foreign currency gains and losses reported by each of the services in their Foreign Currency Fluctuations, Defense (O&M) and (MILPERS) reports; movements of funds between the FCFD account and the services' O&M and MILPERS accounts; and the end-of-year FCFD account balances. We chose this time period in order to capture years in which both gains and losses were experienced, and for which DOD had complete data on gains and losses, fund transfers, and end-of-year balances for the FCFD account. Because the Army charges disbursements to the current fiscal year appropriation instead of the fiscal year appropriation that incurred the obligation, we requested that the Army adjust its reported data on foreign currency gains and losses and provide information consistent with how the other services report them, and with DOD's Financial Management Regulation. However, the Army was unable to provide us with data that were consistent with what was provided by the other services at the time of our review. We therefore were unable to use Army data for purposes of comparison with data provided by the other services. We compared the end-of-year FCFD account balances and the use of the account with guidelines established in our prior work on the importance of examining unobligated balances. Additionally, we reviewed and analyzed DOD financial reports on foreign currency gains or losses and compared the reports, including any identified discrepancies, against best practices and standards on accurate reporting and maintaining quality information, such as those in GAO's Standards for Internal Control in the Federal Government and the Federal Accounting Standards Advisory Board's Handbook of Federal Accounting Standards and Other Pronouncements, as Amended. To determine the reliability of the data used in addressing these objectives, we analyzed DOD and Treasury foreign currency rates, data on DOD foreign currency disbursements, and DOD financial reporting data on foreign currency gains and losses to identify any missing or inaccurate information, and we discussed with agency officials any identified abnormalities and how the information was extracted from systems, when appropriate. We found the data to be sufficiently reliable for the purposes of our reporting objectives, with the exception of the financial reporting on foreign currency gains and losses.
Specifically, based on problems with the completeness and accuracy of DOD's financial reporting on foreign currency gains and losses, we found that these data were not sufficiently reliable for the purpose of computing exact totals for the gains and losses DOD experienced. However, because DOD uses these data as the basis for decisions related to management of the FCFD account, we included the data in our analysis to provide insight into the scope of gains and losses experienced. We also spoke with OUSD(C), military service, and Treasury officials regarding the processes and systems used to input the data and generate the foreign currency reports we reviewed. We conducted this performance audit from February 2017 to April 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Comments from the Department of Defense

Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Matt Ullengren, Assistant Director; and Tulsi Bhojwani, Justin Bolivar, Carol Bray, Amie Lesser, Kelly Liptan, Felicia Lopez, Leah Nash, Randy Neice, Jacqueline McColl, Mike Silver, Roger Stoltz, Susan Tindall, John Trubey, Elaine Vaurio, and Cheryl Weissman made key contributions to this report.
Why GAO Did This Study

DOD requested about $60 billion for fiscal years 2009 through 2017 to purchase goods and services overseas and to reimburse service-members for costs incurred while stationed abroad. DOD uses foreign currency exchange rates to budget and pay (that is, disburse amounts) for these expenses. It also manages the FCFD account to mitigate losses in buying power that result from foreign currency rate changes. GAO was asked to examine DOD's processes to budget for and manage foreign currency fluctuations. This report (1) describes DOD's revision of its foreign currency budget rates since 2009 and the relationship between the revised rates and projected O&M and MILPERS funding needs; (2) evaluates the extent to which DOD has taken steps to reduce costs in selecting foreign currency rates to disburse funds to liquidate O&M obligations, and determined whether opportunities exist to gain additional savings; and (3) assesses the extent to which DOD has effectively managed the FCFD account balance. GAO analyzed data on foreign currency rates, DOD financial management regulations, a non-generalizable sample of foreign currency disbursement data, and FCFD account balances.

What GAO Found

The Department of Defense (DOD) revised its foreign currency exchange rates ("budget rates") during fiscal years 2014 through 2016 for each of the nine foreign currencies it uses to develop its Operation and Maintenance (O&M) and Military Personnel (MILPERS) budget request. These revisions decreased DOD's projected O&M and MILPERS funding needs. DOD's revision of the budget rates during these years also decreased the expected gains (that is, buying power) that would have resulted from an increase in the strength of the U.S. Dollar relative to other foreign currencies. DOD did not revise its budget rates in fiscal years 2009 through 2013. For fiscal year 2017, DOD changed its methodology for producing budget rates, resulting in rates that were more closely aligned with market rates. According to officials, that change made it unnecessary to revise the budget rates during the fiscal year. DOD has taken some steps to reduce costs in selecting foreign currency rates used to pay (that is, disburse amounts) for goods and services, but DOD has not fully determined whether opportunities exist to achieve additional savings. The Army has estimated potential savings of up to $10 million annually by using a foreign currency rate available 3 days in advance of paying for goods or services rather than a more costly rate available up to 5 days in advance. The Army has converted to the use of a 3-day advanced rate. GAO's analysis suggests that DOD could achieve cost savings if the services reviewed and consistently selected the most cost-effective foreign currency rates when paying for their goods and services. Absent a review, DOD is at risk of paying more than would otherwise be required to conduct its transactions. DOD used the Foreign Currency Fluctuations, Defense (FCFD) account to cover losses (that is, less buying power) due to unfavorable foreign currency fluctuations in 6 of the 8 years GAO reviewed. Since 2012, DOD has maintained the FCFD account balance at the statutory limit of $970 million, largely by transferring unobligated balances from certain DOD accounts into the FCFD before they are canceled.
However, DOD has not identified the appropriate FCFD account balance needed to maintain program operations, because it does not routinely analyze projected losses and base transfers into the account on those expected losses. Thus, DOD may be maintaining balances that are hundreds of millions of dollars higher than needed, funds that could have been used for other purposes or returned to the Treasury Department. What GAO Recommends GAO is making four recommendations, including that DOD review opportunities to achieve cost savings by more consistently selecting the most cost-effective foreign currency rates used for the payment of goods and services, and that it analyze projected losses to manage the FCFD account balance. DOD generally concurred with the recommendations.
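The rate arithmetic underlying these findings can be illustrated with a short sketch. The example below is purely hypothetical (the rates, invoice amount, and savings shown are illustrative, not actual DOD, Treasury, or market figures), but it shows why a rate locked closer to the payment date can cost fewer dollars, and how a gap between the budgeted rate and the rate at disbursement produces the buying-power gains and losses the FCFD account is meant to absorb.

```python
# Illustrative only: hypothetical rates and amounts, not DOD or Treasury data.
def usd_cost(amount_foreign: float, rate_fc_per_usd: float) -> float:
    """Dollars needed to buy `amount_foreign` units of a currency
    quoted as foreign-currency units per U.S. dollar."""
    return amount_foreign / rate_fc_per_usd

invoice_eur = 1_000_000.0   # invoice denominated in a foreign currency
budget_rate = 0.95          # hypothetical rate assumed when the budget was built
rate_5_day = 0.90           # hypothetical rate locked 5 days before payment
rate_3_day = 0.92           # hypothetical rate locked 3 days before payment

cost_5 = usd_cost(invoice_eur, rate_5_day)
cost_3 = usd_cost(invoice_eur, rate_3_day)
print(f"Cost at 5-day rate: ${cost_5:,.0f}")
print(f"Cost at 3-day rate: ${cost_3:,.0f}")
print(f"Savings from the 3-day rate: ${cost_5 - cost_3:,.0f}")

# Gain (+) or loss (-) in buying power relative to the budgeted rate;
# losses of this kind are what the FCFD account is used to cover.
budgeted_cost = usd_cost(invoice_eur, budget_rate)
print(f"Gain/loss vs. budget: ${budgeted_cost - cost_3:,.0f}")
```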
Background VA’s mission is to promote the health, welfare, and dignity of all veterans in recognition of their service to the nation by ensuring that they receive medical care, benefits, social support, and lasting memorials. In carrying out this mission, the department manages one of the largest health care delivery systems in the United States that provides enrolled veterans with a full range of services. These services may include primary care; mental health care; and outpatient, inpatient, and residential treatment. VHA, one of the department’s three major components, is responsible for overseeing the provision of health care at all VA medical facilities. IT is widely used and critically important to supporting the department in delivering health care to veterans. As such, VA operates and maintains an IT infrastructure that is intended to provide the backbone necessary to meet the day-to-day operational needs of its medical centers and other critical systems supporting the department’s mission. The infrastructure is to provide for data storage, transmission, and communications requirements necessary to ensure the delivery of reliable, available, and responsive support to all VA staff offices and administration customers, as well as veterans. VA Has Begun to Acquire a New System after a Long History of Efforts to Modernize VistA Over nearly 2 decades, VA pursued multiple efforts to modernize VistA. However, these efforts were abandoned due to expectations of high costs and challenges to ensuring interoperability of health data. Beginning in December 2013, the department initiated VistA Evolution, a joint program between OIT and VHA that focused on implementing a collection of projects to improve the efficiency and quality of veterans’ health care. Specifically, it focused on modernizing the VistA system, increasing the department’s data exchange and interoperability with DOD and private sector health care partners, and reducing the time it takes to deploy new health information management capabilities. The VistA 4 Roadmap was the key plan that the department used to guide VistA Evolution. According to this plan, VistA Evolution was intended to result in lower costs for system upgrades, maintenance, and sustainment. As part of VistA Evolution, the department initiated work to, among other things, standardize VistA instances; expand the use and functionality of the Joint Legacy Viewer; and release enhancements to legacy scheduling, pharmacy, and immunization systems. For example, one focus of the VistA Evolution program over the last several years was to standardize a core set of the system’s modules which, according to the department, account for about 60 percent of VistA. As part of these efforts, the department implemented a process to assess variances in the system at individual sites. According to OIT officials, this process led to more standardization of the code, where possible, and also allowed sites to apply for a waiver if there was a need to continue to operate a nonstandardized VistA instance. Although VistA Evolution was intended to modernize aspects of the system through December 2018, the planned scope of work was reduced as VA redirected the department’s efforts. Specifically, in June 2017, the former VA Secretary announced a significant shift in the department’s approach to modernizing VistA. Rather than continue to use the system, the Secretary stated that the department planned to acquire the same EHR system that DOD is acquiring—Cerner Millennium. 
According to the department, it has chosen to acquire this product because Cerner Millennium should allow all of VA's and DOD's patient data to reside in one system, thus potentially reducing or eliminating the manual and electronic exchange and reconciliation of data between two separate systems. Accordingly, the department awarded an indefinite delivery, indefinite quantity contract to Cerner in May 2018 for a maximum amount of $10 billion over 10 years. Cerner is to replace the 130 instances of VistA with a standard COTS system to be implemented across VA. This new system is to support a broad range of health care functions including acute care, clinical decision support, dental care, and emergency medicine. When implemented, the new system will be expected to become the authoritative source of clinical data to support improved health, patient safety, and quality of care provided by VA. The EHRM program is responsible for managing the Cerner contract implementation. As of June 2019, the department had issued eight task orders to Cerner to: provide project management and planning support services; conduct site assessments at the initial operating capability sites; host the Cerner system and supporting data; perform data migration and enterprise interface development; develop a functional baseline; deploy the Cerner system at the initial operating capability sites; analyze, design, and develop a technical baseline; and provide additional interface development. For fiscal year 2019, the program was appropriated about $1.1 billion for planning and managing the transition from VistA to Cerner. VA's Office of the Deputy Secretary approves spending on EHRM activities according to the appropriation. Further, according to the department, funds are tracked as a major IT investment on the Office of Management and Budget's Federal IT Dashboard. According to VA documentation, the EHRM program is to provide management support and the infrastructure modernization required to install and operate the new system. Further, the department has estimated that an additional $6.1 billion in funding, above the Cerner contract amount, will be needed to fund additional project management support supplied by outside contractors, government labor costs, and infrastructure improvements over the 10-year contract period. Each VA medical facility is expected to continue using VistA until the new system has been deployed. VA plans to deploy the new EHR system at three initial operating capability sites within 18 months of October 1, 2018, with a phased implementation of the remaining sites over the next decade. The three initial deployment sites, located in the Pacific Northwest, are the Mann-Grandstaff, American Lake, and Seattle VA Medical Centers and related clinical facilities that operate the same instances of VistA. These are the first locations where the system is expected to "go live." The task order to deploy the Cerner system at the three initial sites provides a detailed description of the steps Cerner needs to take in order to reach initial operating capability at the Mann-Grandstaff site in March 2020, and at the Seattle and American Lake sites in April 2020. According to the schedule, the initial operating capability sites are expected to be operational by July 2020.
GAO Has Previously Reported on VA's Challenges in Managing Health IT and VistA Modernization In 2015, we designated VA health care as a high-risk area for the federal government, and we continue to be concerned about the department's ability to ensure that its resources are being used cost-effectively and efficiently to improve veterans' timely access to health care. In part, we identified limitations in the capacity of VA's existing IT systems, including the outdated, inefficient nature of key systems and a lack of system interoperability, as contributors to the department's challenges related to health care. In our 2019 update to the high-risk series, we stressed that VA should demonstrate commitment to addressing its IT challenges by stabilizing senior leadership, building capacity, and finalizing its action plan for addressing our recommendations and establishing metrics and mechanisms for assessing and reporting progress. We have also issued numerous reports over the last decade that highlighted the challenges facing VA in modernizing VistA and improving EHR interoperability with DOD. For example: Between July 2008 and January 2010, we issued a series of reports related to provisions included in the National Defense Authorization Act for Fiscal Year 2008 that required VA and DOD to, among other things, jointly develop and implement fully interoperable EHR systems or capabilities and establish an Interagency Program Office to be a single point of accountability for their efforts. These reports summarized progress made over time to set up the program office, but also noted that the office was not positioned to function as a single point of accountability for the delivery of the future interoperable capabilities that the departments were planning. In March 2011, the Secretaries of VA and DOD committed the two departments to the development of a new common integrated electronic health record (iEHR) system and, in May 2012, announced their goal of implementing it across the departments by 2017. However, in February 2014, we reported on the departments' decision to abandon their plans for the iEHR. Specifically, we reported that the Secretaries of VA and DOD, citing challenges in the cost and schedule for developing the iEHR, had announced that they would not continue with the new system and would, instead, pursue separate efforts to modernize or replace their existing systems and work to ensure interoperability between them. Further, we reported that the departments had not addressed management barriers to effectively collaborate on their joint health IT efforts. We made recommendations regarding, among other things, developing a plan to describe the schedule, cost, and roles and responsibilities for the organizations within VA and DOD involved in acquiring, developing, and implementing the EHR systems. The departments agreed with these recommendations and took steps to address them. We reported in August 2015 that VA and DOD, with guidance from the Interagency Program Office, had taken actions to increase interoperability between their EHR systems. However, the office had not yet specified outcome-oriented metrics and established related goals that are important to gauging the impact that interoperability capabilities have on improving health care services for shared patients. As a result, we made several recommendations to VA and DOD to address these deficiencies, and the departments agreed with them.
VA, DOD, and the Interagency Program Office subsequently took actions that addressed the recommendations. In a June 2018 testimony, we noted that VA had undertaken important analyses to better understand the scope of VistA and identify capabilities that can be provided by the Cerner system. The department also had other key activities underway, such as establishing program governance and EHRM program planning. We noted that critical success factors could serve as a model of best practices that VA could apply to enhance the likelihood that the acquisition of the new system would succeed. Further, in a September 2018 testimony, we summarized our previously reported findings on the establishment and evolution of the DOD/VA Interagency Program Office, which has been involved in various approaches to increase health information interoperability between the departments. We noted that the office had not been effectively positioned to function as the single point of accountability for the departments' EHR system interoperability efforts as called for in the National Defense Authorization Act for Fiscal Year 2008. As a result of these findings, we recommended that VA clearly define the role and responsibilities of the Interagency Program Office within the governance plans for acquisition of the department's new EHR system. The department agreed with the recommendation and stated that the Joint Executive Council, a joint governance body comprised of leadership for both VA and DOD, had approved a role for the office. However, as of June 2019, additional work was ongoing to clarify the role of the Interagency Program Office in VA's EHR acquisition. VA Has Undertaken Efforts to Define VistA, but Additional Work Remains In order to maintain internal control activities over an IT system and its related infrastructure, organizations should be able to define the physical and performance characteristics of the system, including descriptions of the components and the interfaces. Further, consistent with GAO's Cost Estimating and Assessment Guide, a comprehensive system definition should identify customization and the environment in which the system operates. While defining a complex IT system can be challenging, having an adequate understanding of its characteristics will better position the organization to comprehensively project and account for costs over the life of a system or program, as well as identify specific technical and program risks. Definition of VistA remains important because VA plans to continue using the system during the department's decade-long transition to the Cerner system. VA maintains multiple documents and a database that describe parts of VistA, including various components and interfaces. However, despite these existing sources, OIT officials acknowledged that there is no comprehensive definition of the VistA system. Consequently, VA has completed a number of efforts to better define VistA and understand the environment in which it operates, and additional work is planned in the future. Specifically, VA has documented descriptions of the system, including the components that comprise it. These descriptions are documented in multiple sources: the VA Monograph, VA Systems Inventory, and VA Document Library. The VA Monograph is a document maintained by OIT that provides an overview of VistA and non-VistA applications used by VHA. According to VHA officials, the VA Monograph is the primary document that describes the components of the system.
The Monograph describes VistA in terms of modules. For each module identified, including VistA modules, the Monograph provides information such as the associated business functions, the VA Systems Inventory identification number, and a link to the VA Document Library for additional technical information. The VA Systems Inventory is a database maintained by OIT that identifies current IT systems at the department, including systems and interfaces related to VistA. For systems identified, the database includes information such as the system name, the system status (i.e., active, in development, or inactive), and related system interfaces. The VA Document Library is an online resource for accessing documentation (i.e., user guides and installation manuals) on the department's nationally released software applications, including VistA. VA has also taken steps to further define the system in its efforts to understand VistA and the environment in which it operates. For example, EHRM program officials recognized the need to further understand the customization of VistA components at the various medical facilities and have conducted analyses to do so. These analyses include: Variance analysis: As part of its VistA Evolution program, which has focused on standardizing a core set of VistA functionality, the department implemented a process to compare the instances of VistA installed at sites to the Enterprise Standard version. The results of this analysis allowed the department to assess the criticality of each variance, which is expected to help with VA's transition to the Cerner system. Module analysis: EHRM program subject matter experts undertook an analysis that involved reviewing and assessing capabilities provided by VistA modules. This analysis enabled department officials to determine whether the capability provided by a VistA module could be provided by the Cerner system, or whether another COTS solution would be required to support this function going forward. Visual mapping: EHRM program officials also directed an analysis that involved developing a notional visual mapping of VA's health care applications, components, and supporting systems within the health delivery environment. The results of this analysis provided a description of the current state of one instance of VistA and the VA health environment, which is intended to inform the department of possible opportunities for business process and IT improvements as it proceeds with the Cerner acquisition. Nevertheless, even with these analyses, VA has not yet fully defined VistA, including, for example, identifying performance characteristics of the system and describing the environment in which it operates. The department's three sources that describe VistA and the additional analyses undertaken do not provide insight into site-specific customizations of the system. For example, the VA Monograph does not include information on module customization at local facilities. In addition, according to OIT officials, the systems inventory does not reflect differences among the 130 different instances of VistA and does not take into consideration regional and local customizations of related components. Further, the visual mapping analysis noted that there was not full insight into the intertwined structure of data and applications or the various local customizations of VistA.
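The variance analysis described above can be thought of as a manifest comparison: each site's installed modules and versions checked against the Enterprise Standard baseline. The sketch below is a minimal illustration of that idea; the module names, version strings, and data structures are assumptions for the example and do not represent VA's actual tooling or inventory.

```python
# Hypothetical sketch of a VistA variance analysis: compare each site's
# module manifest against an Enterprise Standard baseline. Illustrative only.
from typing import Dict

enterprise_standard: Dict[str, str] = {
    "PHARMACY": "v5.3",
    "SCHEDULING": "v5.3",
    "IMMUNIZATION": "v2.0",
}

site_instances: Dict[str, Dict[str, str]] = {
    "Site-A": {"PHARMACY": "v5.3", "SCHEDULING": "v5.3-local7", "IMMUNIZATION": "v2.0"},
    "Site-B": {"PHARMACY": "v5.3", "SCHEDULING": "v5.3", "LOCAL_LAB": "v1.1"},
}

for site, modules in site_instances.items():
    # Standard modules running a locally modified version at this site
    modified = sorted(m for m, v in modules.items()
                      if m in enterprise_standard and v != enterprise_standard[m])
    # Locally added modules with no counterpart in the standard
    local_only = sorted(set(modules) - set(enterprise_standard))
    # Standard modules the site does not run at all
    missing = sorted(set(enterprise_standard) - set(modules))
    print(f"{site}: modified={modified} local-only={local_only} missing={missing}")
```

Each variance surfaced this way could then be assessed for criticality, as the department's process is described as doing.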
EHRM program officials stated that they have not been able to fully define VistA and understand all local customizations due to the decentralization of the development of the system and its evolution over more than 30 years. They explained that VistA's complexity is partly due to the various instances of the system, compounded by local customizations, which have resulted in differences in VistA instances operating at various facilities. According to EHRM program documentation, Cerner's contract calls for the company to conduct comprehensive assessments to capture the current state of technical and clinical operations at specific facilities, as well as identify site-specific requirements where the Cerner system is planned to be deployed. As of June 2019, Cerner had completed site assessments for the three initial operating capability sites in the Pacific Northwest and had planned additional assessments at future deployment sites. The initial site assessments included, among other things, an assessment of the unique VistA instances and the environment in which the system operates. The continuation of planned site assessments should provide a thorough understanding of the 130 VistA versions, help the department better define VistA, and position it for transitioning from VistA to Cerner's COTS solution. VA Identified Total VistA Costs of about $2.3 Billion between 2015 and 2017, but Could Not Sufficiently Demonstrate the Reliability of All Data and Omitted Other Costs When using public funds, an agency must employ effective management practices in order to let legislators, management, and the public know the costs of programs and whether they are achieving their goals. To make those evaluations for a program or for a system as large and complex as VistA, a complete understanding of the system and reliable cost information is required. By following a methodology and utilizing reliable data, an agency can ensure that all costs are fully accounted for, which, in turn, better informs management decisions, establishes a cost baseline, and enhances understanding of a system's performance and return on investment. Fundamental characteristics of reliable costs are that they should be accurate (unbiased, not overly conservative or optimistic), well-documented (supportable with source data, clearly detailed calculations, and explanations for choosing a particular calculation method), credible (identifying any uncertainty or biases surrounding data or related assumptions), and comprehensive (costs are neither omitted nor double counted). Identification of VistA's costs remains important because VA plans to continue using the system during the department's transition to the Cerner system over the next decade. VA identified costs for VistA and its related activities adding up to approximately $913.7 million, $664.3 million, and $711.1 million in fiscal years 2015, 2016, and 2017, respectively—for a total of about $2.3 billion over the 3 years. However, of the $2.3 billion, the department was only able to demonstrate that approximately $1 billion of these costs were reliable. The department could not sufficiently demonstrate the reliability of the remaining approximately $1.3 billion of VistA costs that it identified. In addition, VA identified other categories of VistA-related costs, but omitted these costs from the total.
VA Did Not Sufficiently Demonstrate the Reliability of Data for All VistA Costs Of the $2.3 billion total costs for VistA, VA demonstrated that only approximately $1 billion of these costs were reliable. Specifically, OIT officials identified VistA-related costs within seven categories. The officials were able to sufficiently explain why these categories were included in the development and sustainment costs for VistA and how they were documented by the department; the officials also presented detailed source data for our examination. As a result of our review, we determined that the cost data for these seven categories were accurate, well-documented, credible, and comprehensive and, thus, sufficiently reliable. Table 1 provides a summary of the program costs identified for VistA by OIT and VHA for fiscal years 2015 through 2017 that we determined to be reliable. As shown in the table, VA identified costs for the following seven categories for fiscal years 2015 through 2017: VistA Evolution – The VistA Evolution program costs were associated with VistA strategy, system design, product development, and program management. These costs totaled approximately $549.6 million. Interoperability – The Interoperability program focused on sharing electronic health data between VA and non-VA facilities, including private sector providers and DOD. For example, interoperability costs were associated with architecture, strategy, the Interagency Program Office, product development, and program management. These VistA-related costs totaled approximately $140.2 million. Virtual Lifetime Electronic Record (VLER) Health – This program focused on streamlining the transition of electronic medical information between VA and DOD. These VistA-related costs were associated with product development and program management and totaled approximately $81.2 million. Contracts – Contract costs for VistA Evolution included VHA’s obligations associated with workload management, change management, clinical requirements, and clinical interoperability. These VistA-related costs totaled approximately $202.8 million. Intergovernmental personnel acts – Intergovernmental personnel acts are agreements for the temporary assignment of personnel between the federal, state, and local governments; colleges and universities; Indian tribal governments; federally funded research and development centers; and other eligible organizations. These costs accounted for VHA’s need to use outside experts from approved entities for limited periods of time to work on VistA Evolution assignments. The total VistA-related costs were approximately $2.4 million. Memorandums of understanding – According to VHA, memorandums of understanding are agreements used by the administration to obtain the services of personnel between VA entities for VistA-related activities. These agreements accounted for approximately $2.3 million. Pay – Costs in this category included salaries for VHA staff who worked on VistA-related projects as well as travel, training, and supply costs associated with employment. These costs totaled approximately $34.1 million. However, VA was not able to sufficiently demonstrate the reliability of approximately $1.3 billion in costs related to VistA. 
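Because the amounts above are rounded, a quick roll-up makes the relationships among the figures explicit. The sketch below simply sums the dollar amounts reported in this section (in millions); it is an arithmetic check on the reported totals, not VA's cost model.

```python
# Roll-up of the reported VistA cost figures (in millions of dollars).
reliable = {
    "VistA Evolution": 549.6,
    "Interoperability": 140.2,
    "VLER Health": 81.2,
    "Contracts": 202.8,
    "Intergovernmental personnel acts": 2.4,
    "Memorandums of understanding": 2.3,
    "Pay": 34.1,
}

reliable_total = sum(reliable.values())        # ~1,012.6 -> about $1.0 billion
identified_total = 913.7 + 664.3 + 711.1       # ~2,289.1 -> about $2.3 billion
remainder = identified_total - reliable_total  # ~1,276.5 -> about $1.3 billion

print(f"Reliable costs:       ${reliable_total:,.1f}M")
print(f"Identified total:     ${identified_total:,.1f}M")
print(f"Unverified remainder: ${remainder:,.1f}M")
```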
Specifically, OIT officials identified the additional $1.3 billion in costs, which generally fell into three categories: Legacy VistA: Infrastructure, hosting, and system sustainment – Legacy VistA costs are generally related to the maintenance of fully operational items, such as VistA Imaging and Fileman—two key components related to VistA's operation. The costs also included obligations for costs related to hosting health data in both VA and non-VA facilities. The OIT officials and subject matter experts estimated these total costs to be approximately $343 million during fiscal years 2015 through 2017. However, we were not able to determine the reliability of these costs because, for example, source data were not well documented; changes in the cost information provided to us during our review indicated that the cost data may not be credible; and subject matter experts were unclear about how to separate VistA costs from non-VistA costs. Related software – Related software costs are associated with software supporting or closely integrated with VistA that EHRM officials identified but that was not tracked directly under one of the VistA-related programs. Both OIT and VHA identified software licensing costs as VistA-related obligations. The EHRM program reported these costs to be approximately $389 million in total during fiscal years 2015 through 2017. However, we were not able to determine the reliability of the costs in this category for a variety of reasons, including that source data were not well documented. In addition, VA officials were not clear regarding how the total amounts in each category should be divided between OIT and VHA. Given this confusion, we were not able to determine if the costs were fully accurate or credible. OIT personnel (pay and administrative) – According to EHRM officials, OIT does not track labor costs by program. Instead, the department provided estimates of the salaries paid to OIT government staff working on activities such as VistA Evolution, program management, and overall support of VistA and related applications. OIT personnel costs were estimated by the EHRM program office to be approximately $544 million in total during fiscal years 2015 through 2017. However, we were not able to determine the reliability of costs in this category because the assumptions made for estimating the personnel and salary costs were not well documented and could not be verified. VA Omitted Certain Costs from the Total Cost of VistA In addition, VA omitted certain VistA costs from the total costs identified for fiscal years 2015, 2016, and 2017. Specifically, VA omitted the following costs: Additional hosting – OIT officials stated that additional costs related to hosting health data by an outside vendor, as well as hosting backup VistA instances at each of the medical center sites, should also be included in the total costs for VistA; however, VA omitted these costs from the total for fiscal years 2015 through 2017. Specifically, according to the officials, calculating costs for these hosting activities requires subject matter experts to identify equipment, space, utilities, and maintenance costs for resources allocated specifically for VistA. However, the department has not yet developed a methodology to calculate the costs. The officials said they were working on identifying a reliable approach for calculating these costs in the future.
Data standardization and testing – OIT officials stated that additional costs related to work on clinical terminology mapping and functional testing were not included in the total costs for VistA for fiscal years 2015 through 2017. This work related to mapping existing clinical data to national standards and making updates to VistA or the Joint Legacy Viewer and included mapping data and building test scripts and reports. OIT officials noted that this work had been critical to the VistA Evolution program, but they did not provide actual cost data in this category. The lack of sufficiently reliable and comprehensive costs indicates that the department is not positioned to accurately report the annual costs to develop and sustain VistA. This is due, in part, to the fact that VA has not followed a well-documented methodology that describes how the department determined the total costs for the system. In lieu of a methodology, OIT officials said that leadership and staff from the program made efforts to identify and track the cost components and contracts associated with the system. However, they noted that costs associated with VistA were not all clearly labeled as VistA in an IT system, and it was necessary to estimate other costs. The officials were also unable to verify how VistA-related costs were separated from other department costs in all areas, and subject matter experts were not consistently familiar with the estimation methods employed and how VistA was defined for the purposes of calculating costs. Further, VA officials noted that they were still working on the best approach to identifying and calculating omitted costs. Without a documented methodology describing what costs are to be included and how they are identified and calculated, VA's total does not accurately reflect the development and sustainment costs for VistA. As a result, the department, legislators, and the public do not have the comprehensive, reliable information needed to understand how much it actually cost to develop and maintain the system. Further, VA does not have the reliable information needed to make critical management decisions for sustaining the many versions of VistA over the next 10 years until the Cerner system is fully deployed. VA Has Initiated a Number of Activities to Transition from VistA to the Cerner System VA has initiated a number of actions to prepare for the transition from VistA to the Cerner system. These actions include (1) taking steps to establish a program office reporting to senior agency management, (2) forming a governance structure, (3) conducting assessments at initial system deployment sites, (4) preparing program plans, and (5) setting an initial program baseline. These activities represent important initial steps to prepare for the transition to the new system. The program office is working to hire staff and establish a joint governance structure to coordinate with DOD on the departments' efforts to implement the Cerner system. VA Has Taken Steps to Establish a Program Office Reporting to Senior Agency Management and Efforts to Hire Staff Are Ongoing Strong agency leadership support is a key factor that can increase the likelihood of a program's success. For example, senior leadership can define a vision for the program and intervene when there are difficulties. Such leadership can come from the establishment of a program office with staff reporting to senior agency management.
VA took steps to establish a program office, under the leadership of the VA Deputy Secretary, to support the contract negotiations between the department and Cerner. Toward this end, in January 2018, the department moved the EHRM program office from OIT to directly report to the VA Deputy Secretary. Then, after the contract with Cerner was awarded in May 2018, a new program office—the Office of Electronic Health Record Modernization—was established in June 2018 to plan and implement the EHRM program. The office is intended to coordinate with OIT and VHA leadership—specifically, VA’s CIO and VHA’s Under Secretary for Health—under the direction of an Executive Director. The Executive Director reports directly to the VA Deputy Secretary. Reporting to the Executive Director is the Deputy Executive Director, whose responsibilities include supporting the program’s execution and management, ensuring the program’s direction is in alignment with VA’s desired outcomes, and identifying strategic challenges related to the program. The Office of Electronic Health Record Modernization also includes three management structures: The Chief Medical Office is responsible for overseeing strategy and planning efforts for change management, user testing and training, and business process re-engineering. It also leads communication efforts for the end users and deployment. The Technology and Integration Office is responsible for providing technical leadership, management, and oversight of IT. As such, the office approves technical requirements and supports interoperability with DOD, as well as performs information security, architecture, data migration and management, configuration management, infrastructure engineering, transition and data engineering, and development. The Program Management Office is responsible for, among other things, providing program control support for the scope, schedule, quality, and risk management for the EHRM program; human resources support for the Office of Electronic Health Record Modernization government staff; financial management for operating plans, budgets, cost estimates and reporting; test and evaluation support; and oversight of contracts providing staffing to the EHRM program. As of May 2019, VA was still working to fully staff the Office of Electronic Health Record Modernization. Figure 1 shows the organization of the Office of Electronic Health Record Modernization. According to program officials and the Office of Electronic Health Record Modernization organization chart, the office is expected to be staffed by 289 government employees. These positions are expected to be filled by April 2020 and represent the staff required for the program to achieve its initial operational capability. According to the program’s January 2019 hiring plan, the office had begun its process to reassign staff and hire additional government employees. VA also awarded a contract for program management support. According to EHRM program officials, the support contractor is to supplement the Office of Electronic Health Record Modernization staff with program and project management support, technical support, community care support, and executive support and internal communications, among other areas. The support contractor provides about 370 personnel to deliver project management support. 
The contractor reported as of January 2019 that it had achieved the following accomplishments, among others: Developed a Project Readiness Assessment Report including roles, schedules, risk, and measures of success within the Chief Medical Office. Developed a survey to identify key clinical priorities for data migration related to patient safety and clinical quality. Coordinated the site visit schedule and logistics with initial operating capability sites and conducted site surveys at eight outpatient clinics. By establishing a program office reporting to the Deputy Secretary, VA has begun to build a framework to demonstrate senior agency management support of the program. Establishing the program office also focuses oversight and program management of the EHRM program. VA Has Established Program Governance and Is Working on Developing a Joint Management Structure with DOD Implementing collaborative governance brings together key agency executives to discuss investment performance and increases accountability. In addition, it is critical for program officials to be actively engaged with stakeholders to ensure the success of a major acquisition. The department has established a governance structure that includes multiple levels of governance bodies and stakeholders. In addition, VA has prepared charters for the governance boards and identified board membership. According to the charters for the governance bodies, the structure is intended to address technical and functional issues, as well as any joint management issues that arise between VA and DOD as both departments implement the Cerner EHR. As of January 2019, the EHRM program governance structure was composed of a Steering Committee, Governance Integration Board, Functional Governance Board, Technical Governance Board, and EHR Councils. EHRM program officials have stated that the charters for these boards, which describe their membership and responsibilities, will continue to evolve as the program matures. The Steering Committee, the highest board in the program governance structure, advises the VA Secretary on the progress and performance of the EHRM program toward meeting program goals and outcomes and provides strategic direction on program implementation. This committee is chaired by the Deputy Secretary of VA. Voting members of the committee include, among others, the VA CIO and the Under Secretary for Health. According to the draft charter, the Steering Committee is expected to resolve any items that cannot be resolved at the level of the next lower-level board and is to meet at least quarterly. However, as of January 2019, the Steering Committee had not met. According to program officials, other reviews, such as a monthly program review with the Deputy Secretary, beginning in November 2018, have provided executive-level oversight of the EHRM program and have served the purpose of the Steering Committee. The Governance Integration Board is responsible for integrating and communicating efforts across all lower program governance boards (including the Functional Governance Board and the Technical Governance Board) to meet program goals and milestones. The board has three voting members: the Office of Electronic Health Record Modernization Executive Director, the Assistant Deputy Under Secretary for Health, and the Principal Deputy Assistant Secretary for OIT.
According to the charter, this board is expected to act as arbitrator between clinical, technical, and budget priorities and adjudicate items that cannot be resolved at the lower-level boards. In addition, the Governance Integration Board serves as the EHRM program Configuration Control Board. According to the charter, the board is to meet on a monthly basis. According to program officials and meeting minutes, as of January 2019, the Governance Integration Board had met six times. The Functional Governance Board is responsible for providing guidance on the functional and business community needs for the EHR modernization efforts. This board interacts with the Technical Governance Board as a functional and business advisor. The Functional Governance Board is chaired by the program office’s Chief Medical Officer and includes members from a variety of VHA functional areas (e.g., nursing, community care, and patient safety). According to the charter, the board is to meet on a biweekly basis and is to provide guidance to address functional decisions escalated from the EHR Councils. According to program officials and meeting minutes, as of January 2019, the Functional Governance Board had met 10 times. The Technical Governance Board is responsible and accountable for all decisions related to EHRM program technical transformation efforts. The board is expected to provide technical decision recommendations and collaborate with DOD and other external partners. The chair of this board is the Office of Electronic Health Record Modernization’s Chief Technology and Integration Officer. Other voting members include an OIT CIO representative and selected technical directors from within the Office of Electronic Health Record Modernization. The board’s draft charter specifies that it is to meet on a biweekly basis. According to EHRM program officials, as of January 2019, the Technical Governance Board had met 16 times. The EHR Councils are working groups comprised of subject matter experts from both clinical and functional (i.e., business) domains that are to work with Cerner to provide input and recommendations for developing and validating standard workflows. As of October 2018, a total of 12 councils had been established to address clinical processes and six councils had been established to address business processes. A total of 121 VHA field office staff and 100 VHA central office staff were appointed to these councils. In addition, the councils have eight planned national workshops and seven planned local workshops. These workshops are ongoing and are expected to be completed by October 2019. According to program officials, the national workshops are intended to establish a national baseline for workflow configuration decisions. The local workshops are to review the national baseline and make integration decisions to suit local needs. Figure 2 depicts the relationships among VA’s EHRM program governance bodies. In addition to the program’s governance, the Secretaries of VA and DOD issued a joint memorandum in September 2018 asserting the need to establish a joint management structure, which could have responsibilities beyond those currently within the purview of the Interagency Program Office. According to the agency officials, the joint management structure will be expected to leverage lessons learned by DOD from its experience in deploying the Cerner system, such as the timing of infrastructure upgrades. 
Further, in December 2018, the departments chartered a Joint Electronic Health Record Modernization Work Group to assess the departments' existing EHR modernization strategies and efforts. According to its charter, the work group is also intended to develop and design recommended approaches, processes, and organizational structures to optimize the use of the departments' resources in pursuit of EHR interoperability objectives. The joint working group is to develop short- and long-term recommendations to support four objectives: a single accountable authority to facilitate decision-making; an organizational structure to support the delivery of a single system; coordinated clinical and business workflows; and a coordinated implementation plan and detailed timelines. According to EHRM program officials, the joint working group is to define the joint management structure to be used to coordinate between the departments. According to the charter, the goal is for the recommended joint organization to be operational by the end of September 2019. VA and Cerner Conducted Site Assessments to Refine the Scope of Work As previously discussed, according to EHRM program officials, the department determined that site-specific assessments are required to allow Cerner to appropriately identify the requirements for system implementation at each site. To refine the scope of work required for initial operating capability, Cerner and the department conducted assessments, beginning in July 2018, at the three sites identified to be part of the initial operating capability of the program. These site assessments included, among other things, an assessment of the IT infrastructure at each site and identification of site-specific requirements. Additional site assessments are planned at every facility before the Cerner system is deployed at each location. According to the task order, the assessments are expected to provide perspective on the current state of technical and clinical operations of each facility beyond VA's current documentation. For example, Cerner is expected to document all interfaces with medical devices, third-party systems, and other data sets at each site, as well as update monthly a site readiness checklist to inform comprehensive deployment planning. According to the assessments of the three initial operating capability sites, a number of issues have been identified, such as updating or replacing infrastructure and workstations to be compatible with the Cerner COTS system. In addition, according to the site assessments, the services offered by the department, such as telehealth and behavioral health, are generally more expansive than commercial deployments and will require increased collaboration between VA and Cerner to meet business and system requirements. Thus, the assessments are intended to position Cerner and the department to have more information readily available in order to better plan for site-specific issues prior to actual implementation. VA Is Preparing Program Plans for Implementation Program planning is critical for ensuring effective management of key aspects of an IT program and serves as the basis for controlling and managing project performance. These key aspects include, for example, identification of the program's scope, responsible organizations, costs, and schedules. The Office of Electronic Health Record Modernization Executive Director approved an initial Program Management Plan for the EHRM program in November 2018.
According to the plan, it is to guide the management of the EHRM program and define the policies and processes necessary to achieve the program's goals. It briefly defines the program's scope and strategy, including the assumptions made. For example, according to the plan, the EHRM program assumes that VA and DOD will use a single instance of the Cerner system. Further, it states that both the legacy VistA data and EHRM data will be available to both VistA and new system users during the transition. The Program Management Plan also identifies a series of subordinate plans that have been developed to further elaborate on specific program planning and execution activities. For example, the plan summarizes the Deployment Management Plan, which details the strategy and tasks required from initial site assessment through configuration, testing, training, change management, deployment, and transition to sustainment. The plan also describes the Schedule Management Plan, which defines the development and maintenance of the integrated master schedule for the life of the program. Thus, the Program Management Plan provides the guidance for where to look for key planning information for the department. The EHRM program also developed a draft Risk Management Plan, dated September 2018, that defines how risk and issue planning, analysis, and management are to be implemented. The draft risk management process consists of risk identification and mitigation, including conducting risk management planning, identification, analysis, response planning, response identification, and monitoring. According to the plan, management of overall program risk is intended to keep risk exposure within an acceptable range and maximize the likelihood of achieving overall objectives. In addition, the EHRM program developed plans for change management, communications, and training activities to ensure that VA clinicians, staff members, volunteers, and veterans understand and are ready for the changing systems and processes that will affect them. The initial versions of the plans were delivered by Cerner in November 2018. The program's approach is to continue to evolve these plans as the program matures. By developing these program plans, VA is taking steps to ensure effective management of key aspects of the EHRM program. VA Established a Program Baseline for Achieving Initial System Deployments Baselined program plans act as a guide throughout the life of an investment to provide a basis for measuring performance, identify who is accountable for the deliverables, describe the implementation approach and interdependencies, identify key decisions, and embed quality assurance and reviews. Ultimately, baseline management demonstrates that a project is under financial and managerial control. According to EHRM program officials, on October 30, 2018, the program conducted a review of the time period from contract award through initial operating capability. The review validated the scope of the program for the transition of VistA to the initial operating capability sites, identified an initial work breakdown structure, and included an integrated master schedule and a cost baseline. The results of this review established a baseline for the initial operating capability, and changes to the baseline are subject to change control. Also, as a result of the review, the Office of Electronic Health Record Modernization is to conduct monthly program reviews to inform the Deputy Secretary of the status of the EHRM program.
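The risk analysis and prioritization that the draft Risk Management Plan describes, and that the baseline review discussed below applied to 10 program risks, can be illustrated with a minimal sketch. The risks, scores, and register structure here are assumptions for the example (the first entry paraphrases a risk named in this report); they are not VA's actual register or scoring scale.

```python
# Hypothetical sketch of prioritizing program risks by probability and impact.
from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    probability: int  # 1 (low) to 5 (high)
    impact: int       # 1 (low) to 5 (high)
    mitigation: str

    @property
    def exposure(self) -> int:
        # A simple probability-times-impact score for ranking
        return self.probability * self.impact

risks = [
    Risk("Required infrastructure upgrades not completed before go-live", 4, 5,
         "Develop acquisition strategies from site-assessment requirements"),
    Risk("Local VistA customizations not captured in site requirements", 3, 4,
         "Complete site assessments at every facility before deployment"),
    Risk("Key program-office positions unfilled at initial capability", 2, 3,
         "Execute the hiring plan; backfill with support-contractor staff"),
]

# Rank risks from highest to lowest exposure and show each mitigation
for r in sorted(risks, key=lambda r: r.exposure, reverse=True):
    print(f"[exposure {r.exposure:>2}] {r.description} -> {r.mitigation}")
```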
According to EHRM program officials, upgrades to the IT infrastructure are to be accomplished by OIT, and the local area network infrastructure is to be upgraded at all initial operating capability sites prior to implementation of the new system. As baselined, upgrades of end user devices are scheduled to be completed at the Mann-Grandstaff site by September 2019, the American Lake site by October 2019, and the Seattle site by November 2019. Program officials have stated that the goal is to have infrastructure upgrades at a site completed 6 months before the site begins to implement the Cerner system. However, in May 2019, EHRM program officials indicated that infrastructure updates may be delayed for the initial sites by up to 3 months. After an evaluation of the initial operating capability, the EHRM program is to determine whether the minimum operational capabilities have been achieved. Figure 3 shows a timeline of the baselined implementation milestones for the initial sites, established at the review held in October 2018. The baseline review also included identifying and addressing program risks related to the Cerner system implementation. The review identified 10 program risks, prioritized the risks by probability and impact, and assigned mitigation plans for the risks. For example, the review identified the risk that if required infrastructure upgrades were not implemented, then VA would not be able to deploy a fully operational EHR system. The program identified development of acquisition strategies to address infrastructure requirements from the site assessments as an action to mitigate this risk. By establishing a program baseline for the initial operating capability, VA has instituted a basis for measuring actual versus planned program performance. In addition, the risk mitigation plans provide an approach to address the identified risks. Conclusions VA lacks a comprehensive definition of the VistA system that captures the complexity of the system, the environment in which it operates, and the local customizations that have evolved in the VistA instances over many years. Consequently, VA has engaged in efforts to provide additional insight into the system. The department plans to continue to conduct comprehensive site-specific assessments with Cerner to refine its understanding of the unique VistA instances and the environment in which the system operates. The continuation of planned site assessments should help VA better define VistA. With regard to calculating costs for VistA, the department has identified reliable costs for approximately $1 billion in development and sustainment for the system over 3 fiscal years. However, VA was not able to sufficiently demonstrate the reliability of an additional $1.3 billion of costs identified and omitted other relevant costs from the total. The cost deficiencies existed largely because VA officials were uncertain about what to identify as part of VistA; documentation related to certain categories of costs was incomplete; and a documented methodology for identifying and reporting those costs does not exist. As a result, VA lacks the comprehensive and reliable cost information needed to make critical management decisions for sustaining the system and ensuring an accurate basis for reporting on the return on its investment for replacing VistA. 
VA has taken a number of actions to prepare for the transition from VistA to the Cerner system, such as establishing and beginning to staff a program office, forming a governance structure, conducting site assessments at initial sites, preparing program plans to guide the initial implementation, and setting an initial program baseline to help guide implementation of the system at three key sites. Recommendation for Executive Action The Secretary of VA should direct the Under Secretary for Health and the Assistant Secretary for Information and Technology/Chief Information Officer to develop and implement a methodology for reliably identifying and reporting the total costs of VistA. The methodology should include steps to identify the definition of VistA and what is to be included in its sustainment activities, as well as ensure that comprehensive costs are corroborated by reliable data. (Recommendation 1) Agency Comments and Our Evaluation VA provided written comments on a draft of this report. In its comments (reprinted in appendix II), the department generally agreed with our conclusions and concurred with our recommendation. The department stated that it will provide the actions it plans to take to address the recommendation within 180 days. VA also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of VA, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions on matters discussed in this report, please contact me at (202) 512-4456 or harriscc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. Appendix I: Objectives, Scope, and Methodology Our objectives were to: (1) determine the extent to which the Department of Veterans Affairs (VA) has defined the Veterans Health Information Systems and Technology Architecture (VistA), (2) evaluate VA’s annual costs to develop and sustain VistA, and (3) describe the actions VA has taken to transition from VistA to the Cerner system. To address the first objective, we examined VA documentation including the VA Monograph, reports from the VA Systems Inventory, and documents listed in the VA Software Document Library. These documents were cited by VA officials as sources that define the VistA system and provide information on modules and interfaces. Our review and compilation of information from these three sources enabled us to describe the various sources used at the department to document information about the VistA system and identify the limitations of each source. We also examined the VistA Product Roadmap, which described modernization plans and achievements related to VistA. Further, we interviewed officials from the Veterans Health Administration (VHA) to obtain information on additional efforts undertaken by the department to further understand and define VistA. In addition, we reviewed program documentation related to three analyses undertaken by VA to further define VistA. These analyses included the department’s efforts to ascertain variances between versions of VistA, identify components of VistA to be replaced by the Cerner System, and document the current state of a sample instance of VistA. 
For example, we examined VA documentation that described software modules available in the department's VistA product and program documentation identifying components of VistA to be replaced by the Cerner system. In addition, our review of a visual mapping developed for Electronic Health Record Modernization (EHRM) program officials depicting the environment in which VistA operates allowed us to describe the size and complexity of the system and how it is used by the department. Further, we compared the extent to which VA has defined VistA with criteria for defining information technology (IT) systems described in GAO's Standards for Internal Control in the Federal Government and our Cost Estimating and Assessment Guide. In addition, we reviewed EHRM program documentation related to site assessments that have taken place at initial operating capability sites and are planned for future sites. Specifically, we reviewed the relevant contract task order to understand how site assessments were planned and to identify site-specific gaps between the current VistA system in use and the target future Cerner system. We supplemented our documentation reviews with information obtained through interviews with officials from VA's Office of Information and Technology (OIT), VHA, and the EHRM program office. To address the second objective, we examined department documentation of costs associated with the development and sustainment (operation and maintenance) of VistA for fiscal years 2015, 2016, and 2017. These 3 fiscal years were selected because development and sustainment cost information for full fiscal years should have been available during the time period in which we conducted our evaluation. To compile the total costs, we examined all categories of costs identified by VA to determine the reliability of the source data. We also discussed with officials from the EHRM program the methodology VA used to identify costs and to estimate costs when source data were not available. We compared the identified cost data to best practices described in GAO's Cost Estimating and Assessment Guide that are the basis for effectively capturing reliable program costs. The guide also describes the importance of documenting the methodology by which costs are included and how they are calculated in detail, step by step, to provide enough information so that someone unfamiliar with the program could easily recreate or update cost calculations. Specifically, we analyzed all cost documentation provided by the department over the course of our work. For example, OIT officials identified VistA costs tracked under three programs—VistA Evolution, Interoperability, and Virtual Lifetime Electronic Record (VLER) Health—and VHA officials reported that costs for the system were tracked separately from OIT through various types of contracts and agreements associated with VistA Evolution. In regard to the OIT and VHA program data, VA provided detailed source data, which we analyzed for reliability, and we verified the calculations of the costs identified over the course of our work. We also examined the documentation and controls related to the IT systems VA identified as the source of these cost data. The systems included OIT's Budget Tracking Tool and VA's Financial Management System. Further, we discussed with cognizant OIT and VHA officials the nature of the cost data, the rationale for why each cost line item was included, and any anomalies found during our analysis.
For example, anomalies included omitted contract numbers or transposed entries in summary tables. As a result of these efforts, OIT and VHA were able to sufficiently demonstrate the reliability of the program data for the purpose of calculating costs for VistA. Officials from the EHRM program also identified costs that were not directly tracked under the program areas previously mentioned. OIT and VHA relied upon subject matter experts or vendors to identify costs or to calculate estimates for cost categories such as sustainment, maintenance, co-location, hosting, pay, administrative, and infrastructure costs related to VistA operations. We analyzed the data provided for reliability, consistent with the GAO Cost Estimating and Assessment Guide, over the course of our work. Further, we discussed the nature of the cost data, the rationale behind why each cost line item was included, and any anomalies found during our analysis with cognizant OIT and VHA officials. We also interviewed OIT and VHA subject matter experts and vendors identified by VA to examine the rationale or methodology for how the costs were identified and estimated. During the course of our work, VA continued to revise these estimates as part of the department's efforts to identify the costs for VistA; the department either could not provide a consistent, documented methodology for how the costs were calculated or provided only summary costs that could not be analyzed. As such, VA was not able to sufficiently demonstrate the reliability of legacy VistA, related software, and OIT personnel costs for our purpose of calculating the total costs for VistA. This report does not conclude that the data are unreliable, only that a reliability determination could not be made during the course of our work. However, given the importance of these related costs to VistA, we have summarized and reported them in the total cost amount for VistA to more accurately approximate the magnitude of total costs, but we have not reported itemized costs in these areas. Finally, the department identified additional costs, related to hosting and to data standardization and testing, that should be included in the compilation of the total costs for VistA. However, the department did not provide such data to include in the total costs for VistA.
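The kind of step-by-step, recreatable documentation the guide calls for can be illustrated with a short sketch. The Python fragment below is purely illustrative: the cost line items, amounts, and reliability determinations are hypothetical and do not reflect actual VA data.

```python
# Illustrative sketch of a documented, recreatable cost-aggregation step.
# All line items, amounts (in millions), and reliability determinations are
# hypothetical; they do not reflect actual VA cost data.
from collections import defaultdict

# Hypothetical line items: (fiscal_year, category, amount_millions, reliable)
line_items = [
    (2015, "VistA Evolution (OIT)", 250.0, True),
    (2015, "Hosting (vendor estimate)", 45.0, False),
    (2016, "VistA Evolution (OIT)", 230.0, True),
    (2016, "OIT personnel (estimate)", 80.0, False),
    (2017, "VistA Evolution (VHA contracts)", 210.0, True),
    (2017, "Legacy VistA software (estimate)", 95.0, False),
]

totals = defaultdict(float)    # total reported cost per fiscal year
reliable = defaultdict(float)  # portion demonstrated reliable

for year, category, amount, is_reliable in line_items:
    totals[year] += amount
    if is_reliable:
        reliable[year] += amount

for year in sorted(totals):
    print(f"FY{year}: reported ${totals[year]:.1f}M, "
          f"demonstrated reliable ${reliable[year]:.1f}M")
```

Recording each line item with its source and reliability determination in this way is what would allow someone unfamiliar with the program to recreate or update the totals, the property we found lacking in the methodology VA used for its estimated cost categories.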
To address the program’s establishment of an initial program baseline, we examined the decision memorandum approving the award of the Cerner contract, the briefings presented to program stakeholders at the initial program baseline review, and the documents supporting the program baseline review. We supplemented our analysis with information obtained through interviews with relevant department officials including the Executive Director and Chief Technology and Integration Officer for the EHRM program. We conducted this performance audit from August 2017 to July 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Comments from the Department of Veterans Affairs Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Mark Bird (Assistant Director), Jennifer Stavros-Turner (Analyst in Charge), John Bailey, David Blanding, Chris Businsky, Juaná Collymore, Rebecca Eyler, Jacqueline Mai, Scott Pettis, and Charles Youman made key contributions to this report.
Why GAO Did This Study VA provides health care services to approximately 9 million veterans and their families and relies on its health information system—VistA—to do so. However, the system is more than 30 years old, is costly to maintain, and does not fully support exchanging health data with DOD and private health care providers. Over nearly 2 decades, VA has pursued multiple efforts to modernize the system. In June 2017, the department announced plans to acquire the same system—the Cerner system—that DOD is implementing. VA plans to continue using VistA during the decade-long transition to the Cerner system. GAO was asked to review key aspects of VistA and VA's plans for acquiring the Cerner system. The objectives of the review were to (1) determine the extent to which VA has defined VistA, (2) evaluate VA's annual costs to develop and sustain VistA, and (3) describe the actions VA has taken to transition from VistA to the Cerner system. GAO analyzed documentation that defines aspects of VistA and identifies components to be replaced; evaluated the reliability of cost data, including obligations associated with the development and sustainment of VistA for fiscal years 2015, 2016, and 2017; and reviewed program documentation related to VA's program, governance, and plans to transition to Cerner. What GAO Found The Department of Veterans Affairs (VA) has various documents and a database that describe parts of the Veterans Health Information Systems and Technology Architecture (VistA); however, the department does not have a comprehensive definition for the system. For example, VA has identified components that comprise VistA, identified interfaces related to the system, and collected system user guides and installation manuals. VA has also conducted analyses to better understand customization of VistA components at various medical facilities. Nevertheless, the existing information and analyses do not provide a thorough understanding of the local customizations reflected in about 130 versions of VistA that support health care delivery at more than 1,500 sites. Program officials stated that they have not been able to fully define VistA because development of the system has been decentralized for more than 30 years. Cerner's contract to provide a new electronic health record system to VA calls for the company to conduct comprehensive assessments to identify site-specific requirements at the locations where its system is planned to be deployed. Three site assessments have been completed and additional assessments are planned. If these assessments provide a thorough understanding of the 130 VistA versions, the department should be able to define VistA and be better positioned to transition to the new system. VA identified costs for VistA and its related activities totaling approximately $913.7 million, $664.3 million, and $711.1 million in fiscal years 2015, 2016, and 2017, respectively—for a total of about $2.3 billion over the 3 years. However, of the $2.3 billion, the department was able to demonstrate that only approximately $1 billion of these costs was sufficiently reliable. In addition, the department omitted VistA-related costs from the total. The lack of a sufficiently reliable and comprehensive total cost for VistA is due in part to VA not following a well-documented methodology that describes how the department determined the costs for the system.
As a result of incomplete cost data and data whose reliability could not be determined, the department, legislators, and the public do not have a complete understanding of how much it has cost to develop and maintain VistA. Further, VA lacks the information needed to make decisions on sustaining the many versions of the system. VA has initiated a number of actions to prepare for the transition from VistA to the Cerner system. These actions include establishing and beginning to staff a program office, forming a governance structure, conducting assessments at the initial sites, preparing program plans to guide the initial system implementation, and setting a program baseline to help guide implementation at the initial sites. The department's actions in these important areas are ongoing. Additional actions are in progress to address GAO's September 2018 recommendation that VA clearly define the role and responsibilities of the joint Department of Defense (DOD) and VA Interagency Program Office in the department's governance plans for the new electronic health record system. VA intends to continue maturing and fully establishing a program management organization and a program governance structure to track program progress. What GAO Recommends GAO is recommending that VA develop and implement a methodology for reliably identifying and reporting the total costs of VistA. VA agreed with the recommendation.
Background Presidential Directives Define DHS's CI Security Mission In February 2013, the White House released Presidential Policy Directive (PPD)-21, Critical Infrastructure Security and Resilience, directing DHS to coordinate the overall federal effort to promote the security and resilience of the nation's CI from all hazards. Within DHS, NPPD has been delegated the responsibility for the security and resilience of the nation's CI, and within NPPD, the Office of Infrastructure Protection (IP) leads and coordinates national programs and policies on CI issues. Also in February 2013, the President issued Executive Order 13636, "Improving Critical Infrastructure Cybersecurity," citing repeated cyber intrusions into critical infrastructure as demonstrating the need for improved cybersecurity. Among other things, the order addressed the need to improve cybersecurity information sharing and collaboratively develop risk-based standards; stated U.S. policy to increase the volume, timeliness, and quality of cyber threat information shared with private sector entities; directed the federal government to develop a technology-neutral cybersecurity framework to help CI owners and operators identify, assess, and manage cyber risk; and required DHS to use a consultative process to identify infrastructure in which a cybersecurity incident could result in catastrophic consequences. The National Infrastructure Protection Plan Provides a Framework for Managing Risk The NIPP sets forth a risk management framework and outlines DHS's roles and responsibilities regarding CI security and resilience. As shown in figure 1, the NIPP risk management framework is a planning methodology that outlines the process for setting goals and objectives; identifying assets, systems, and networks; assessing risk; implementing protective programs and resiliency strategies; and measuring performance and taking corrective action. The risk management framework calls for public and private CI partners to conduct risk assessments to understand the most likely and severe incidents that could affect their operations and communities, and to use this information to support planning and resource allocation in a coordinated manner. According to the NIPP, the risk management framework is also intended to inform how decision makers take actions to manage risk, which, according to DHS, is influenced by the nature and magnitude of a threat, the vulnerabilities to that threat, and the consequences that could result, as shown in figure 2. Multiple DHS Offices Are Involved in CI Risk Assessment Activities Multiple DHS offices conduct or assist with risk assessments for CI, including the Office of Cybersecurity and Communications (CS&C), the Office of Infrastructure Protection, and the Office of Cyber and Infrastructure Analysis (OCIA). The Office of Infrastructure Protection and CS&C both use voluntary programs to introduce risk-related tools intended to identify gaps in infrastructure security. These include voluntary security surveys and vulnerability assessments carried out by DHS's Protective Security Advisors (PSA) and Cyber Security Advisors (CSA). PSAs are CI protection and security specialists responsible for assisting asset owners and operators with protection strategies for physical assets, and CSAs are cybersecurity specialists responsible for helping to bolster owners' and operators' cyber assessment capabilities.
Both types of advisors use their respective assessment tools to work with CI stakeholders to develop measures intended to make assets more resilient. Other DHS offices with CI risk assessment responsibilities include DHS's Office of Intelligence and Analysis, the U.S. Coast Guard, and TSA. PPD-21 and the NIPP also call for other federal departments and agencies to play a key role in CI security and resilience activities in their capacity as SSAs. In general, an SSA is a federal department or agency responsible for, among other things, supporting the security and resilience programs and related activities of designated CI sectors. DHS is designated as the SSA or co-SSA for 10 of the 16 CI sectors and has assigned its SSA duties to multiple entities, including the Office of Infrastructure Protection, TSA, the Coast Guard, and the Federal Protective Service. For our three selected sectors, DHS's Sector Outreach and Programs Division (SOPD), within the Office of Infrastructure Protection, serves as the SSA for the Critical Manufacturing and nuclear sectors. DHS's TSA and the U.S. Department of Transportation serve as co-SSAs for the Transportation Systems sector. Federal agencies or departments external to DHS serve as the SSAs for the remaining six sectors. Figure 3 provides descriptions of the 16 sectors, identifies the SSA of each sector, and highlights the three selected sectors. Risk Assessment Activities Vary Based on Sector's Regulatory Environment For some sectors, assets or operations are regulated by federal or state regulatory agencies that possess unique insight into the risk mitigation strategies of the CI they oversee. These regulators, who may not serve as the designated SSA for the sector, help establish safety and security protocols for the industries they regulate and ensure sector resilience through the policymaking and oversight processes. For example, the Nuclear Regulatory Commission, in its role as the regulatory agency for the nuclear sector, conducts threat assessments to help protect against acts of radiological sabotage and to prevent the theft of special nuclear material. Additionally, pursuant to the Maritime Transportation Security Act of 2002, DHS must use risk management in specific aspects of its homeland security efforts. For example, the Coast Guard and other port security stakeholders are required to carry out certain risk-based tasks, including assessing risks and developing security plans for ports, facilities, and vessels. NIST Framework Provides Voluntary Cybersecurity Guidance DHS is also involved in promoting and supporting the adoption of the NIST Framework for Improving Critical Infrastructure Cybersecurity. In accordance with requirements in Executive Order 13636, as discussed above, this framework provides voluntary standards and procedures for CI organizations to follow to better manage and reduce cybersecurity risk, and it is designed to foster communication among CI stakeholders about cybersecurity management. In December 2015, we reported that SSAs and NIST had promoted and supported adoption of the cybersecurity framework in the CI sectors. For example, in February 2014, DHS established the Critical Infrastructure Cyber Community Voluntary Program to encourage adoption of the framework and has undertaken multiple efforts as part of this program. These include developing guidance and tools that are intended to help sector entities use the framework.
We also reported that DHS did not have metrics to measure the success of these program efforts, and we recommended that DHS develop metrics to understand the effectiveness of its promotion activities. DHS concurred, and in December 2016 DHS officials stated that they plan to continue to work with SSA partners and NIST to determine how to develop measurement activities and collect information on the voluntary program's outreach and its effectiveness in promoting and supporting the cybersecurity framework. We are currently conducting a review that will identify actions taken by relevant federal entities, including NIST, DHS, and other SSAs, to promote the adoption of the cybersecurity framework. We will continue to monitor the voluntary program's outreach as well as DHS's efforts to measure its effectiveness in promoting and supporting the cybersecurity framework. Efforts to Increase Operational Efficiency among CI Assets Result in Physical and Cyber Security Convergence and Expand the Potential for Cyberattacks The convergence of physical and cyber security is a major challenge for owners and operators of CI as more physical processes and systems are connected to Internet-enabled networks to improve operational efficiency, according to DHS officials. For example, facilities may make use of automated building control systems to control certain processes or functions, such as security, lighting, or heating, ventilation, and air conditioning (HVAC). These control systems increase efficiency and optimize operational performance by reducing the need for manual controls and adjustments. Building control systems and the devices within them are often configured with connections to the Internet. These Internet connections allow the systems to be accessed remotely for control and monitoring and, for example, to receive software patches and updates. Figure 4 illustrates how a facility's HVAC and security systems are managed through a building automation system and operated over a control network. In this example, the information systems and networks are protected by a firewall—a cybersecurity countermeasure—while the control network and its devices have direct Internet connectivity without going through a firewall, potentially allowing a cyber-attacker to control the building's electronic door locks. Broader examples of these types of networked systems include electrical grids and water distribution systems, as well as control systems that operate chemical manufacturing processes, monitor natural gas pipelines, and control petroleum refineries. Depending on the cyberattack, there is potential to disrupt specific infrastructure operations, and such an event could lead to cascading effects within the sector or in other sectors of the economy. According to a 2015 DHS report on cyber-physical infrastructure risks, greater connectivity among technologies that connect cyber systems to physical systems expands the potential for cyberattack by malicious actors. The growing convergence of these systems means that exploited cyber vulnerabilities can result in physical consequences as well.
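A brief sketch can make the exposure illustrated in figure 4 concrete. The device inventory and attributes below are hypothetical and greatly simplified; the fragment simply flags devices that reach the Internet without passing through a firewall, the condition that could allow an attacker to reach the building's electronic door locks.

```python
# Hypothetical, simplified sketch of the exposure shown in figure 4:
# devices on a control network that reach the Internet without passing
# through a firewall are flagged as potential cyber-physical attack paths.
devices = [
    {"name": "HVAC controller",       "network": "control",     "internet": True,  "behind_firewall": False},
    {"name": "Electronic door locks", "network": "control",     "internet": True,  "behind_firewall": False},
    {"name": "Business file server",  "network": "information", "internet": True,  "behind_firewall": True},
]

exposed = [d for d in devices if d["internet"] and not d["behind_firewall"]]

for device in exposed:
    print(f"Exposed: {device['name']} on the {device['network']} network "
          "is Internet-reachable without a firewall")
```

In this toy inventory, the two control-network devices would be flagged while the firewalled information system would not, mirroring the configuration depicted in figure 4.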
DHS Primarily Assesses the Three Elements of Risk Separately for CI, and Private Sector Representatives from Selected Sectors Report Threat Information Most Valuable DHS primarily assesses the three elements of risk (threat, vulnerability, and consequence) separately for individual CI assets and sectors. According to DHS officials, these assessments help critical infrastructure owners and operators take actions to improve security and mitigate risks. However, according to SCC representatives from the three selected sectors, timely and actionable threat assessment data are the most useful type of risk information. In limited circumstances, DHS generates risk assessments that collectively incorporate all three elements of risk, which selected SCC representatives found of limited use for their sectors' infrastructure protection efforts due to the amount of time it takes to finalize the assessment data, the inclusion of risk scenarios that are not likely to occur, and results that are not applicable to individual assets. DHS Shares Threat Assessment Information with CI Owners and Operators Threat Information Products Help Make Critical Infrastructure in Selected Sectors More Secure and Resilient DHS's Office of Intelligence and Analysis (I&A) compiles information from a variety of classified and unclassified sources to develop threat-related analytic products for each of the 16 CI sectors. I&A's threat assessment efforts include classified briefings intended to help CI owners and operators manage risks to their individual operations and assets, and to determine effective strategies to make them more secure and resilient. DHS typically shares these products via its Homeland Security Information Network for Critical Infrastructure (HSIN-CI) platform. I&A also partners with sector-specific agencies to engage asset owners and operators directly during biweekly classified threat briefings to share threat data. During these meetings, I&A officials and CI owners and operators use the opportunity to identify potential threat-related risks that may inform future I&A threat products. The Homeland Security Information Network–Critical Infrastructure (HSIN-CI) HSIN-CI is the Department of Homeland Security's (DHS) information sharing platform and collaboration tool for critical infrastructure stakeholders. It is the primary system through which private sector owners and operators, DHS, and other federal, state, and local government agencies collaborate to protect CI. According to DHS, it is an unclassified, web-based communications system for sharing sensitive but unclassified information. Users can access protection alerts, information bulletins, incident reports, situational updates, and analyses. Users can also engage in secure discussions with sector peer groups. Other features include CI protection training, planning and preparedness information, and a document library. Similarly, TSA's Office of Intelligence (TSA-OI) receives intelligence information regarding threats to transportation-related assets and disseminates it to industry officials with transportation responsibilities, as well as to other federal, state, and local officials. TSA-OI disseminates security information through products including reports, assessments, and briefings. For example, TSA-OI, in conjunction with I&A and the Federal Bureau of Investigation, provides intelligence and security information to mass transit and passenger rail security directors, law enforcement chiefs in major metropolitan areas, and Amtrak officials through joint classified intelligence and analysis briefings. Although it is not an intelligence generator, TSA-OI receives and assesses intelligence from within and outside of the intelligence community to determine its relevance to transportation security.
Sources of information outside the intelligence community include other DHS components, law enforcement agencies, and owners and operators of transportation systems. TSA-OI also reviews suspicious activity reporting by Transportation Security Officers, Behavior Detection Officers, and Federal Air Marshals. DHS officials from IP and TSA told us that they also share threat information within their respective sectors. For example, as the Critical Manufacturing SSA, IP disseminates threat information to sector stakeholders daily. Officials from IP also hold quarterly threat briefings to alert stakeholders to relevant threats. TSA likewise shares transportation security-related information, including details on threats, vulnerabilities, and suspicious activities, with Transportation Systems sector stakeholders through unclassified or classified products and briefings. For example, TSA provides Transportation Intelligence Notes to transportation security partners to offer additional information or analysis on a specific topic and to provide situational awareness of ongoing or recent incidents. Table 1 in appendix I summarizes DHS threat assessment activities and products provided to the three selected sectors. Examples of Threat Information the Department of Homeland Security Provides to Critical Infrastructure Owners and Operators Classified Threat Briefings: Officials from the Office of Intelligence and Analysis and the sector-specific agencies participate in briefings at regular intervals with critical infrastructure owners and operators to share threat information gathered from intelligence sources. Incident-Specific Outreach: The Nuclear Reactors, Materials, and Waste sector-specific agency hosts incident-specific meetings and calls for sector stakeholders. Daily Threat Briefings: DHS publishes a daily e-mail that contains threat information intended to provide situational awareness from a variety of sources, including the Federal Emergency Management Agency, the Department of Justice, and other stakeholders as appropriate. According to DHS, these e-mails are distributed to more than 140 recipients in the Critical Manufacturing sector. NCCIC Established to Share Cyber Threat Information According to DHS, the NCCIC is a 24x7 cyber situational awareness, incident response, and management center. The center shares information among public and private sector partners to build awareness of cyber vulnerabilities, incidents, and mitigation strategies; its partners include other government agencies, the private sector, and international entities. The NCCIC works with the private sector by integrating (both physically and virtually) CI owners and operators into the center's operations so that, during an incident, threat information can be aggregated and communicated between government and appropriate private sector partners in an efficient manner. The NCCIC manages several programs that provide data used in developing 43 products and services in support of its 11 statutorily required cybersecurity functions. The programs include monitoring network traffic entering and exiting federal agency networks and analyzing computer network vulnerabilities and threats. The products and services are provided to its customers in the private sector; federal, state, local, tribal, and territorial government entities; and other partner organizations.
For example, the NCCIC issues indicator bulletins, which can contain information related to cyber threat indicators, defensive measures, and cybersecurity risks and incidents. A list of these products and services is summarized in table 5 in appendix II. As of September 2017, 199 private sector CI owners and operators had as-needed access to the NCCIC through their participation in the Cyber Information Sharing and Collaboration Program (CISCP). The National Cybersecurity and Communications Integration Center (NCCIC) The Department of Homeland Security's (DHS) NCCIC serves as a central location where partners involved in cybersecurity and communications protection coordinate and synchronize their efforts. NCCIC's partners include other government agencies, the private sector, and international entities. According to DHS, working closely with its partners, the NCCIC analyzes cybersecurity and communications information, shares timely and actionable information, and coordinates response, mitigation, and recovery efforts. The NCCIC is made up of four branches: NCCIC Operations and Integration; the United States Computer Emergency Readiness Team; the Industrial Control Systems Cyber Emergency Response Team; and the National Coordinating Center for Communications. In February 2017, we reported that the NCCIC had taken steps to perform each of its 11 statutorily required cybersecurity functions, such as being a federal civilian interface for sharing cybersecurity-related information with federal and nonfederal entities. However, we recommended nine actions to DHS for enhancing the effectiveness and efficiency of the NCCIC, including determining the applicability of the implementing principles and establishing metrics and methods for evaluating performance. DHS concurred with our recommendations, and we will monitor DHS's progress toward addressing them. Selected Private Sector Representatives Reported Threat Data as Most Useful Risk Information SCC representatives we spoke to from the three selected sectors cited threat assessment data as generally the most useful risk information for CI owners and operators. Each of these six representatives indicated that threat information must be distributed rapidly to owners and operators in order to maintain its value and utility. Three of the six representatives reported that DHS generally provides threat information in a timely manner. For example, SCC representatives from the nuclear sector told us that timely threat information from DHS was helpful in dispelling erroneous reports circulating that the terror attacks in Belgium were aimed at nuclear sites in that region. According to these SCC representatives, working with DHS to gather credible information in a timely fashion was very valuable to their sector because it allowed owners and operators within the sector to determine whether they needed to implement certain protocols to ensure that they were not vulnerable to similar attacks. The remaining three representatives told us that delays in receiving threat information from DHS decreased the value of this information. For example, one representative noted that he believes DHS's process for vetting threat information before it is shared with his sector prevents the agency from disseminating valuable threat information in a timely manner. Another representative shared an example in which the threats referenced in one of the products distributed by DHS had already been identified and addressed.
However, the third of these representatives emphasized that despite delays in receiving information from DHS, government threat information is very credible and a major resource often used by security managers when proposing security upgrades to their respective chief executive officers. This representative also highlighted TSA's adoption of industry-defined intelligence priorities as significant, because it directly supports training and awareness initiatives that create opportunities for prevention. The NIPP establishes that the government is to provide the private sector with access to timely and actionable information in response to developing threats and crises. Similarly, the sector-specific plans from each of the three selected sectors emphasize reliance upon timely and actionable threat information. For example, the 2015 Transportation Systems sector-specific plan discusses the importance of an effective and efficient process for receiving, analyzing, and disseminating pertinent and timely threat information, and it states that effective protection or response to a potential hazard relies on providing the stakeholders at greatest risk with real-time or near real-time alerts of emerging or breaking events. According to one SCC representative, threat information is the one element of risk that adds the most value because it allows owners and operators to react immediately to improve their security posture and mitigate the effects of any potential hazards. The representative added that specific products like TSA-OI's annual country-specific threat assessments are particularly useful because a number of companies within his sector have business interests outside the U.S., and these reports help them stay abreast of potential threats abroad. Three of the six SCC representatives we interviewed reported that information regarding cybersecurity threats has become increasingly important. One SCC representative from the Critical Manufacturing sector stated that many of the security managers within his sector are physical security experts who now face more and more questions related to cybersecurity threats as a result of the cyber and physical security convergence their companies are experiencing. Therefore, the Critical Manufacturing sector worked with federal partners to increase access to the NCCIC, the FBI, and the U.S. Secret Service for additional cybersecurity support and also began promoting the sector's awareness and use of the NIST framework. DHS Conducts Voluntary Physical and Cyber Vulnerability Assessments for CI Infrastructure Survey Tool The Infrastructure Survey Tool (IST) is one of the Department of Homeland Security's (DHS) voluntary vulnerability assessment tools available to critical infrastructure owners and operators. It is a web-based security survey conducted by a Protective Security Advisor in coordination with facility owners and operators to identify the overall security and resilience of a facility. The survey contains more than 100 questions used to gather information on such things as physical security, security forces, security management, information sharing, and protective measures. The IST results inform owners and operators of potential vulnerabilities facing their asset or system and recommend measures to mitigate those vulnerabilities. Facility owners access results and preview the effects of proposed mitigation measures through the interactive IST Dashboard.
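A short sketch can illustrate the kind of interactive preview the IST Dashboard description above suggests. The survey categories, weights, and scores below are hypothetical assumptions for illustration and are not DHS's actual scoring methodology; the fragment only shows how a weighted survey score could be recomputed to preview a proposed mitigation measure.

```python
# Hypothetical sketch of an IST-style weighted survey score with a
# mitigation "preview." Categories, weights, and scores are invented for
# illustration and do not reflect DHS's actual methodology.
weights = {"physical_security": 0.30, "security_forces": 0.20,
           "security_management": 0.20, "information_sharing": 0.15,
           "protective_measures": 0.15}

def facility_score(responses):
    """Weighted average of per-category scores (0-100 scale)."""
    return sum(weights[c] * responses[c] for c in weights)

baseline = {"physical_security": 55, "security_forces": 70,
            "security_management": 60, "information_sharing": 40,
            "protective_measures": 50}

# Preview a proposed measure: improved information-sharing procedures.
proposed = dict(baseline, information_sharing=75)

print(f"Baseline score: {facility_score(baseline):.1f}")
print(f"With proposed measure: {facility_score(proposed):.1f}")
```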
NPPD helps CI owners and operators develop capabilities to mitigate vulnerabilities, primarily by using PSAs to conduct voluntary physical vulnerability assessments in coordination with owners and operators. These assessments focus on physical infrastructure and are generally asset-specific and conducted during site visits at individual assets. They are used to identify security vulnerabilities and potential risk mitigation strategies for owners and operators to address over time. One tool PSAs use in conducting CI assessments is the Infrastructure Survey Tool, which is used to assess facilities that agree to participate voluntarily. According to NPPD officials, vulnerability assessments take longer to develop than threat assessments, and the vulnerabilities identified are typically more static than threats, which are constantly evolving. PSAs store the collected assessment data on DHS's Infrastructure Protection Gateway, an information sharing platform intended for use by DHS and its homeland security partners, including CI owners and operators, for access to infrastructure protection tools and information in support of incident preparedness and response efforts. Table 2 in appendix I summarizes the physical vulnerability assessments DHS conducts for the three selected sectors. In September 2014, we reported that the vulnerability assessment tools and methods that different DHS offices and components used varied with respect to the areas of vulnerability assessed. For example, we found that while all of the assessment tools we reviewed considered perimeter security, just over half of these tools (6 of 10) included an assessment of cybersecurity. We also found that DHS had not established guidance on what areas should be included in a vulnerability assessment. We recommended, among other things, that DHS review its vulnerability assessments to identify the most important areas of vulnerability to be assessed and establish guidance. DHS agreed with our recommendation, and in July 2016 reported that IP had taken steps to collect and evaluate information on the various vulnerability assessment tools and methods used by DHS offices and components. More specifically, IP identified six security areas to incorporate into DHS assessment tools and methods. DHS reported in August 2016 that DHS offices and components received guidance for the areas and the specified levels of detail to be incorporated into existing assessment tools. As a result of addressing this recommendation, we believe that DHS is better positioned to collect and analyze assessment data to enable comparisons and determine priorities between and across CI sectors. DHS is also taking additional steps to address related recommendations from our September 2014 report that remain open. For example, we recommended that DHS develop and implement ways it can facilitate data sharing and coordination of vulnerability assessments to minimize the risk of potential duplication or gaps in coverage. As of September 2017, in response to this recommendation, DHS officials reported they were coordinating with stakeholders and developing features in an online portal to better facilitate vulnerability assessment data sharing. We will continue to monitor the status of DHS's efforts to address these recommendations.
In addition, in July 2017, DHS officials reported that they were finalizing a strategy intended to identify ways that vulnerability assessment data can be used not only by CI owners and operators but also by DHS and other government stakeholders to improve their own decision-making. According to these officials, DHS held workshops with over 120 stakeholders from NPPD, as well as senior officials from other designated sector-specific agencies and federal departments, who identified the need for DHS to provide more vulnerability assessment data related to lifeline facilities—such as water and wastewater treatment plants and train stations. They also noted that stakeholders recommended that DHS use the vulnerability assessment data it collects to conduct trend analysis in specific CI sectors and geographic regions. The Cyber Resilience Review The Cyber Resilience Review is one of the Department of Homeland Security's (DHS) cyber vulnerability assessments available to critical infrastructure owners and operators. It is a voluntary, nontechnical assessment to evaluate an organization's operational resilience and cybersecurity practices. It may be conducted as a self-assessment or as an on-site assessment facilitated by DHS Cyber Security Advisors. It assesses enterprise programs and practices across 10 domains: asset management, controls management, configuration and change management, vulnerability management, incident management, service continuity management, risk management, external dependency management, training and awareness, and situational awareness. DHS Offers Voluntary Cyber Vulnerability Assessments for CI Owners and Operators The Office of Cybersecurity and Communications (CS&C) offers CI owners and operators a suite of voluntary vulnerability assessments aimed at securing their cyber systems. For example, CS&C's Industrial Control Systems Cyber Emergency Response Team (ICS-CERT) is responsible for taking steps to help mitigate vulnerabilities to computer-based systems that are used to monitor and control industrial processes. CS&C also maintains the National Cybersecurity Assessment and Technical Services team, which offers cybersecurity scanning and testing services that identify vulnerabilities within stakeholder networks and provide risk analysis and remediation recommendations. The CSA program also provides cyber assessment services for CI owners and operators through on-site vulnerability assessments of cyber systems. CSAs offer the Cyber Infrastructure Survey Tool, an assessment of essential cybersecurity practices instituted by critical infrastructure organizations to protect their critical IT services, as well as the Cyber Resilience Review, which evaluates an organization's operational resilience and cybersecurity practices. A summary of DHS's critical infrastructure cyber vulnerability assessment efforts can be found in table 3 in appendix I. Selected Private Sector Representatives View Asset-Specific Vulnerability Assessments As Useful Sector Coordinating Council representatives from two of the three selected sectors stated that DHS's vulnerability assessment efforts were useful for determining vulnerabilities for individual CI owners and operators, but their opinions varied concerning the usefulness of aggregating sector-wide data and sharing it broadly among private sector stakeholders.
For example, one SCC representative told us that the risk scores associated with individual vulnerability assessments are of value to the CI owners and operators of the infrastructure for which that assessment was administered. However, this representative also mentioned that these scores have limited value beyond the individual asset because risks differ greatly between companies, rendering sector-wide or regional vulnerability assessments less useful. Another SCC representative told us that because the membership of their respective sectors is so broad and diverse, it is difficult for members to discern the value of high-level aggregated vulnerability data—especially data from organizations with very different business models. However, another SCC representative indicated that DHS could offer aggregated vulnerability assessment data to all CI stakeholders for the purpose of developing broader situational awareness. DHS Conducts Consequence Assessments as Part of Its Infrastructure Survey Tool While DHS's IST is used to assess vulnerabilities for CI, the tool also includes a consequence module intended to allow DHS to assess facility criticality in terms of potential loss of life and economic impact. Also, OCIA analyzes the consequences of incidents and models past events to better understand the effect of these disruptions on assets and to predict the consequences of future events. Table 4 in appendix I describes the DHS components and corresponding products and activities associated with consequence assessments. DHS officials we spoke with stated that consequence information is important to owners and operators. These officials added that DHS needs to demonstrate that potential losses can be avoided by owners' and operators' investment in risk mitigation, thereby reducing the overall consequence of a potential incident on the CI owner's operations and the nation. Three of the six SCC representatives we interviewed told us that consequence information was not useful. For example, one SCC representative noted that consequence information is not very useful for owners and operators because timely threat information combined with knowledge of an asset's vulnerabilities puts owners and operators in a better position to mitigate potential incidents and, subsequently, any associated consequences. DHS officials acknowledged that a range of perspectives concerning the usefulness of consequence information exists and stated that these differences reflect the array of owner and operator views about how to use risk information for different risk management decisions. DHS Conducts Complete Risk Assessments for CI Sectors on a Limited Basis Within DHS, NPPD, TSA, and the Coast Guard are responsible for developing complete risk assessments, which can be conducted for an entire CI sector or for multiple sub-sectors within a CI sector. Both TSA and the Coast Guard regularly conduct complete risk assessments within the Transportation Systems sector. However, according to a senior OCIA official, NPPD receives very few requests for complete risk assessments. Our review of available assessment documentation found that, among our three selected sectors, DHS has conducted complete risk assessments for the Transportation Systems sector but not for the other two sectors. For example, the Transportation Systems Sector Security Risk Assessment is TSA's annual report to Congress on transportation security.
It assesses risk by establishing risk scores for various attack scenarios within the sector, including for domestic aviation; examines risks to individual transportation modes; and compares them to risks within and across modes. Table 6 in appendix III describes the assessment in more detail. Also within the Transportation Systems sector, the Coast Guard's Maritime Security Risk Analysis Model (MSRAM) serves as the primary tool for assessing and managing security risks for all of the vessels, barges, and facilities regulated by the Coast Guard under the Maritime Transportation Security Act of 2002. Since its development and implementation in 2005, MSRAM has provided the Coast Guard with a standardized way of assessing risk to maritime infrastructure, referred to in the analysis model as targets, which can include chemical facilities, oil refineries, hazardous cargo vessels, passenger ferries, and cruise ship terminals. For example, a scenario related to cruise ships identified using this analysis model could include a boat bomb or an attack by a hijacked vessel. MSRAM is designed to allow comparison between different targets at the local, regional, and national levels, with the goal of reducing risk by prioritizing security activities and resources. To prioritize and assess security risks at U.S. ports and facilities, the Coast Guard uses MSRAM to calculate risk using threat judgments provided by the Coast Guard Intelligence Coordination Center. The Center provides threat probabilities for MSRAM based upon judgments regarding the specific intent, capability, and geographic preference of terrorist organizations to deliver an attack on a specific type of maritime target class—for example, a boat bomb attack on a ferry terminal. To make these judgments, Center officials use intelligence reports generated throughout the broader intelligence community to make qualitative determinations about certain terrorist organizations and the threat they pose to the maritime domain. At the sector level, Coast Guard MSRAM users are required to use the threat probabilities provided by the Center to ensure that threat information is consistently applied across ports. MSRAM users at the sector level also assess the vulnerability of targets within their respective areas of responsibility and the consequences of a successful attack on these targets. The vulnerability and consequence factors included in the MSRAM assessment can be found in table 7 in appendix III. According to one NPPD official, various sector councils have requested analysis of certain risk elements, such as vulnerabilities or consequences, as opposed to complete risk assessments. For example, councils have asked for analysis of vulnerabilities and consequences due to potential failures within their sector's respective systems and the potentially cascading effects of these failures on systems beyond their own span of control. This official noted that these requests provide the opportunity for OCIA to develop analytic products that companies within these sectors can then use as part of the risk assessments they conduct for themselves, as well as analytic products more broadly related to homeland security risks.
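To make concrete how a complete assessment such as MSRAM can combine the three risk elements into comparable scores for ranking targets, consider the following sketch. The multiplicative form (risk = threat × vulnerability × consequence) and all values are assumptions for illustration only; the factors the Coast Guard actually uses are summarized in table 7 in appendix III.

```python
# Illustrative sketch of ranking targets by a combined risk score.
# The multiplicative form and all values are assumptions for illustration;
# they are not the Coast Guard's actual MSRAM factors or data.

# (target, scenario, threat_probability, vulnerability, consequence)
scenarios = [
    ("Ferry terminal",    "boat bomb",       0.020, 0.6, 0.8),
    ("Cruise ship",       "hijacked vessel", 0.010, 0.4, 0.9),
    ("Chemical facility", "boat bomb",       0.005, 0.7, 1.0),
]

# risk = threat x vulnerability x consequence; rank highest risk first
ranked = sorted(scenarios, key=lambda s: s[2] * s[3] * s[4], reverse=True)

for target, scenario, t, v, c in ranked:
    print(f"{target} ({scenario}): risk score {t * v * c:.4f}")
```

Scoring every target with the same formula is what would allow the kind of comparison between targets at the local, regional, and national levels described above.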
SCC representatives from our three selected sectors told us that complete risk assessments are of limited utility for CI owners and operators because complete assessments take a long time to produce, often involve risk scenarios that are not likely to occur, or generate results that are so broad that they may not be applicable to individual assets. For example, according to one SCC representative, the diversity among the members of his sector, including in the size and sophistication of their operations, is the primary reason that conducting a complete risk assessment for the sector would not be helpful for individual companies. Similarly, another SCC representative told us that the private sector does not operationalize information from complete risk assessments because the assessments do not add practical value and some of the scenarios evaluated in the assessments are not applicable to many of the companies within their sector. TSA and NPPD officials explained the utility of complete risk assessments, particularly for government decision-making purposes. For example, TSA officials told us that they believe the Transportation Systems Sector Security Risk Assessment's data gathering methodology for identifying risk inputs adds the most value in the assessment process for CI owners and operators in the Transportation Systems sector. According to these officials, the data gathering process is extensive and involves a substantial number of industry experts who are brought together to analyze potential threats, vulnerabilities, and consequences across the five transportation modes for which TSA is responsible. The officials added that this elicited risk information allows TSA to better allocate resources across the multiple transportation modes. According to one senior OCIA official, NPPD is best suited to execute complete risk assessments that are intended to focus on broad risks to CI and are not specific to individual CI assets. For example, NPPD is providing risk information for the execution of the 2018 Homeland Security National Risk Characterization (HSNRC), which evaluates the full range of risks addressed by DHS. This official stated that their office is working with DHS's Office of Policy to maximize the value of the insights gained from the HSNRC effort and to use them to inform NPPD decisions about strategy and policy. DHS Uses CI Risk Information to Inform Strategic Planning and Guide Outreach to Owners and Operators DHS uses CI risk information in multiple ways, including to inform strategic planning and develop analytic products, and, at the component level, to guide its day-to-day owner and operator outreach and incident response. DHS is also facilitating risk-based cross-sector planning and information sharing through sector coordinating councils. DHS Uses CI Risk Information to Inform Its Strategic Planning and is Taking Actions to Improve Supporting Risk Analysis According to DHS Office of Policy officials, DHS is using risk information to inform departmental strategic planning as part of its third QHSR. The QHSR is DHS's process for updating the national homeland security strategy, identifying critical homeland security missions, and assessing the organizational alignment of DHS with the homeland security strategy and missions.
The results of the QHSR are used in DHS's Strategic Plan, which outlines how DHS plans to implement the QHSR homeland security goals, lists strategies to achieve these goals, and identifies performance measures to track progress toward these goals. The QHSR incorporates multiple sources of risk information, including the HSNRC. The HSNRC assesses natural hazards, such as floods, and manmade hazards, such as terrorism. According to Office of Policy and NPPD officials, NPPD provides a broad range of risk-related inputs to support the implementation of the HSNRC risk assessment methodology. These inputs give DHS officials a better understanding of risks to CI during strategic planning, according to Office of Policy officials. Our prior work on DHS's QHSR found that DHS assessed homeland security risks for its second QHSR, for fiscal years 2014 to 2018, by considering threats, vulnerabilities, and consequences. We also found that while the QHSR risk assessment described a wide range of homeland security challenges and was a valuable step toward using risk information to prioritize and select risk management activities, DHS did not document how its various analyses were synthesized to generate results, thus limiting the reproducibility and defensibility of the results. We found that without sufficient documentation, the QHSR risk assessment results could not easily be validated or the assumptions tested, hindering DHS's ability to improve future assessments. In addition, the QHSR described homeland security hazards but did not rank those hazards or provide prioritized strategies to address them. We reported that comparing and prioritizing risks helps identify where risk mitigation is most needed and helps justify cost-effective risk management options. Thus, we recommended that future QHSR risk assessments reflect key elements of successful risk assessment methodologies, including being documented, reproducible, and defensible. We also recommended that DHS refine its risk assessment methodology so that in future QHSRs it can compare and prioritize homeland security risks and risk mitigation strategies. DHS concurred with these recommendations and outlined steps it planned to take to address them. DHS officials have since described several steps taken in response. According to these officials, the Office of Policy held initial meetings with government and nongovernment subject matter experts after the release of our report to refine the HSNRC. Also, according to these officials, a Departmental Risk Modeling and Analysis Steering Committee (Risk committee) was convened in June 2016 to review and approve proposed new methodologies to help identify and prioritize threats and hazards for the HSNRC. According to NPPD officials, NPPD proposed updates to the HSNRC process as part of the Risk committee proceedings, such as changing the scope and detail of the assessment. The Risk committee evaluated these requests and finalized proposals for use in the third QHSR, which is scheduled to be released in 2018. DHS's Office of Infrastructure Protection Uses CI Risk Information to Inform Outreach to Owners and Operators and Incident Response According to IP officials, PSAs use risk information to guide their outreach to CI owners and operators.
PSAs use the National Critical Infrastructure Prioritization Program (NCIPP) list—which prioritizes CI assets into different levels according to their criticality—to inform their outreach to owners and operators. PSAs and their leadership use the NCIPP list to prioritize outreach to owners and operators across each level of assets within their area of jurisdiction for participation in DHS's voluntary security survey and vulnerability assessment programs, as shown in figure 5. Generally, PSAs engage CI owners and operators in the order shown in the pyramid in figure 5, starting with Level 1. According to IP officials, PSAs also use risk information to guide incident response. The officials explained that when an incident occurs, they pull information from a variety of sources, including the database of assets on the NCIPP list, to identify CI in the affected area. OCIA officials then prioritize this information into a list to guide incident response efforts. For example, when Hurricane Hermine approached Georgia in September 2016, PSAs received a list from OCIA that categorized potentially affected CI assets in the region into priority levels. The PSAs used the list to prioritize their outreach to the highest priority assets. Officials from the CSA program also plan to use risk information to guide cybersecurity outreach to CI owners and operators. According to CS&C officials, CSAs are currently able to meet resource demands for outreach with little or no delay. However, as the CSA program continues to expand, CSAs plan to use a risk-based methodology to prioritize outreach. This methodology considers cyber threats, vulnerabilities, and consequences to determine how and where CSAs are used, according to CS&C officials. DHS SSA representatives for our three selected sectors also use risk information to guide their outreach to CI owners and operators. For example, in response to a physical threat to a nuclear facility in Brussels, Belgium, nuclear sector SSA officials engaged with private sector representatives on the SCC and discussed ways to improve their information-sharing process. In another example, Critical Manufacturing SSA officials determined that smaller businesses in their sector did not have business continuity plans. According to these SSA officials, this was a risk that could disrupt the operations of these small businesses and of other businesses in their supply chain. SSA officials developed a tool to help Critical Manufacturing sector owners and operators develop their own continuity plans—including templates, tabletop exercises, and a self-directed risk assessment. According to the Critical Manufacturing sector-specific plan, the expanded use of business continuity planning will enhance the resilience of the Critical Manufacturing Sector.
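The level-based prioritization described in this section, for both routine PSA outreach and incident response, can be sketched briefly. The assets and level assignments below are hypothetical; the fragment only shows engagement ordered by NCIPP level, starting with Level 1, as in figure 5.

```python
# Hypothetical sketch of NCIPP-style prioritization: assets are engaged in
# order of criticality level (Level 1 first). Names and levels are invented.
assets_in_area = [
    {"name": "Water treatment plant", "ncipp_level": 2},
    {"name": "Regional rail hub",     "ncipp_level": 1},
    {"name": "Manufacturing plant",   "ncipp_level": 3},
]

# Lower level number means higher criticality and earlier outreach.
for asset in sorted(assets_in_area, key=lambda a: a["ncipp_level"]):
    print(f"Level {asset['ncipp_level']}: contact {asset['name']}")
```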
There is also a Critical Infrastructure Cross-Sector Council, composed of the SCC chairs and vice chairs from each of the 16 sectors, that meets quarterly to discuss, among other things, details about risks and opportunities to share information across sectors. Additionally, this Critical Infrastructure Cross-Sector Council provides a forum for the leaders of the SCCs to provide senior-level, cross-sector strategic coordination with DHS. The chairperson of the cross-sector council also communicates with owners and operators across the sectors as situations arise. For example, the chairperson convened a teleconference within 24 hours of a recent terror attack in the United Kingdom to share information and answer questions about potential risks or lessons learned for CI owners and operators. In addition, DHS engages private sector owners and operators in cross-sector discussions through sector planning documents. For example, the 2015 sector-specific plans for each of the three sectors we studied include descriptions of cross-sector interdependencies. These include summaries of lifeline functions––such as energy, water, communications, and transportation systems––which are essential to the operations of most CI partners and communities. During development of the 2015 sector-specific plans, the sectors and SSAs also collaborated and identified emerging risks that spanned multiple sectors, as shown in figure 6.

Agency and Third Party Comments

We provided a draft of this product to DHS for review and comment. DHS provided technical comments, which we incorporated as appropriate. We also provided draft excerpts of this product to the selected sector coordinating council representatives we interviewed, who provided technical comments that we also incorporated as appropriate. We are sending copies of this report to interested congressional committees and the Secretary of Homeland Security. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (404) 679-1875 or CurrieC@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V.

Appendix I: Selected Risk Information Products and Activities Distributed by the Department of Homeland Security

The following tables highlight threat, vulnerability, and consequence products and activities developed by the Department of Homeland Security for the purpose of providing risk information to critical infrastructure owners and operators.

Appendix II: NCCIC Cybersecurity Products and Services

Table 5 below highlights the cybersecurity products and services that the National Cybersecurity and Communications Integration Center (NCCIC) reported providing to its customers in fiscal years 2015 and 2016.

Appendix III: Summary of Department of Homeland Security Complete Risk Assessments for Critical Infrastructure

The following tables highlight complete risk assessments regularly conducted by the Transportation Security Administration and the U.S. Coast Guard within the Transportation Systems sector.
Appendix IV: National Critical Infrastructure Prioritization Program Consequence-Based Criteria and Relative Thresholds

Figure 7 below illustrates the Department of Homeland Security's (DHS) approach for prioritizing the list of systems and assets that the Secretary of Homeland Security determines would, if destroyed or disrupted, cause national or regional catastrophic effects. DHS has prioritized these CI assets into different levels according to their criticality to inform its outreach to owners and operators. Consistent with the National Infrastructure Protection Plan risk management framework, the criteria for determining which level each asset is assigned to on the National Critical Infrastructure Prioritization Program (NCIPP) list are entirely consequence-based thresholds and include fatalities, economic loss, mass evacuation length, or national security impacts. (A simplified, illustrative sketch of such threshold-based level assignment appears after the appendixes.)

Appendix V: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Ben Atwater (Assistant Director) and Landis Lindsey (Analyst-in-Charge) managed this audit engagement. Chuck Bausell, Michele Fejfar, Daniel Glickstein, Tracey King, Steve Komadina, Tom Lombardi, Kush Malhotra, Gabrielle Matuzsan, and Claire Peachey made significant contributions to this report.
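To make the consequence-based approach in appendix IV concrete: assigning an asset to an NCIPP level amounts to testing its projected consequences against a set of thresholds. The Python sketch below is illustrative only; the actual NCIPP thresholds are not public, so the cutoff values, field names, and two-level scheme here are invented assumptions rather than DHS's criteria.

```python
from dataclasses import dataclass

@dataclass
class ConsequenceEstimate:
    """Projected consequences for one CI asset (all values hypothetical)."""
    fatalities: int          # projected loss of life
    economic_loss: float     # projected economic loss, in dollars
    evacuation_days: int     # projected length of a mass evacuation
    national_security: bool  # projected national security impact

def assign_level(c: ConsequenceEstimate) -> int:
    """Return 1 (most critical) or 2, using invented thresholds.

    The real NCIPP criteria are consequence based (fatalities, economic
    loss, mass evacuation length, national security impacts), but the
    numeric cutoffs here are placeholders for illustration.
    """
    if (c.fatalities >= 5000
            or c.economic_loss >= 25e9
            or c.evacuation_days >= 30
            or c.national_security):
        return 1
    return 2

print(assign_level(ConsequenceEstimate(10000, 5e9, 10, False)))  # -> 1
```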
Why GAO Did This Study

The nation's critical infrastructure includes cyber and physical assets and systems across 16 different sectors whose security and resilience are vital to the nation. The majority of critical infrastructure is owned and operated by the private sector. Multiple federal entities, including DHS, work with infrastructure owners and operators to assess their risks. GAO was asked to review DHS's risk assessment practices for critical infrastructure. This report describes: (1) DHS's risk assessment practices in 3 of 16 critical infrastructure sectors and private sector representatives' views on the utility of this risk information, and (2) how this risk information influences DHS's strategic planning and private sector outreach. GAO selected 3 of 16 sectors––Critical Manufacturing; Nuclear Reactors, Materials, and Waste; and Transportation Systems––to examine based on their varied regulatory structures and industries. GAO reviewed DHS guidance related to infrastructure protection, the QHSR and DHS Strategic Plan, and plans for the selected critical infrastructure sectors. GAO interviewed DHS officials responsible for critical infrastructure risk assessments, and the owner and operator representatives who serve as chairs and vice-chairs of coordinating councils for the 3 selected sectors. Information from the 3 sectors is not generalizable to all 16 sectors but provides insight into DHS's risk management practices. GAO provided a draft of this report to DHS and relevant excerpts to the council representatives interviewed during this review. Technical comments provided were incorporated as appropriate.

What GAO Found

The Department of Homeland Security (DHS) primarily conducts assessments for each of the three elements of risk—threat, vulnerability, and consequence—for critical infrastructures from the three sectors GAO reviewed—Critical Manufacturing; Nuclear Reactors, Materials, and Waste; and Transportation Systems. In limited circumstances, DHS generates risk assessments that both incorporate all three elements of risk and cover individual or multiple subsectors.

Threat: DHS's Office of Intelligence and Analysis assesses threats—natural or manmade occurrences, entities, or actions with the potential to cause harm, including terrorist attacks and cyberattacks—and disseminates this information to critical infrastructure owners and operators. For example, the Transportation Security Administration provides threat intelligence to mass transit security directors and others through joint classified briefings.

Vulnerability: DHS officials provide various tools and work directly with owners and operators to assess asset and facility vulnerabilities—physical features or operational attributes that render an asset open to exploitation, including gates, perimeter fences, and computer networks. For example, DHS officials conduct voluntary, asset-specific vulnerability assessments that focus on physical infrastructure during individual site visits.

Consequence: DHS officials also assess consequence—the effect of occurrences like terrorist attacks or hurricanes resulting in losses that impact areas such as public health and safety, and the economy—to better understand the effect of these disruptions on assets. These assessments help critical infrastructure owners and operators take actions to improve security and mitigate risks.
Six private sector representatives told GAO that threat information is the most useful type of risk information because it allows owners and operators to react immediately to improve their security posture. For example, one official from the Transportation Systems sector said that government threat information is credible and is critical in supporting security recommendations to company decision-makers. DHS uses the results of its risk assessments to inform the department's strategic planning and to guide outreach to infrastructure owners and operators. Critical infrastructure risk information is considered within DHS's strategic planning. Specifically, according to DHS officials, risk information informs the Department's Quadrennial Homeland Security Review (QHSR)—a process that identifies DHS's critical homeland security missions and its strategy for meeting them. DHS also uses risk information to guide outreach to critical infrastructure owners and operators. For example, DHS officials annually prioritize the most critical assets and facilities nationwide and categorize them based on the severity of the estimated consequences of a significant disruption to the asset or facility. DHS officials then use the results to target their assessment outreach to the infrastructure owners and operators categorized as higher risk. DHS officials also told GAO that they use risk information after an incident, such as a natural disaster, to quickly identify and prioritize affected infrastructure owners and operators to help focus their response and recovery assistance outreach.
Background

According to the President's budget, the federal government plans to invest more than $96 billion for IT in fiscal year 2018—the largest amount ever budgeted. However, as we have previously reported, investments in federal IT too often result in failed projects that incur cost overruns and schedule slippages, while contributing little to the desired mission-related outcomes. For example: The Department of Veterans Affairs' Scheduling Replacement Project was terminated in September 2009 after spending an estimated $127 million over 9 years. The tri-agency National Polar-orbiting Operational Environmental Satellite System was disbanded in February 2010 by the White House's Office of Science and Technology Policy after the program spent 16 years and almost $5 billion. The Department of Homeland Security's Secure Border Initiative Network program was ended in January 2011, after the department obligated more than $1 billion for the program. The Office of Personnel Management's Retirement Systems Modernization program was canceled in February 2011, after the agency had spent approximately $231 million on its third attempt to automate the processing of federal employee retirement claims. The Department of Veterans Affairs' Financial and Logistics Integrated Technology Enterprise program was intended to be delivered by 2014 at a total estimated cost of $609 million, but was terminated in October 2011. The Department of Defense's Expeditionary Combat Support System was canceled in December 2012 after spending more than a billion dollars and failing to deploy within 5 years of initially obligating funds. Our past work found that these and other failed IT projects often suffered from a lack of disciplined and effective management, such as project planning, requirements definition, and program oversight and governance. In many instances, agencies had not consistently applied best practices that are critical to successfully acquiring IT. Such projects have also failed due to a lack of oversight and governance. Executive-level governance and oversight across the government have often been ineffective, particularly oversight by chief information officers (CIO). For example, we have reported that some CIOs' roles were limited because they did not have the authority to review and approve the entire agency IT portfolio.

Implementing FITARA Can Improve Agencies' Management of IT

FITARA was intended to improve covered agencies' acquisitions of IT and enable Congress to monitor agencies' progress and hold them accountable for reducing duplication and achieving cost savings. The law includes specific requirements related to seven areas. Federal data center consolidation initiative (FDCCI). Agencies covered by FITARA are required to provide OMB with a data center inventory, a strategy for consolidating and optimizing their data centers (to include planned cost savings), and quarterly updates on progress made. The law also requires OMB to develop a goal for how much is to be saved through this initiative, and provide annual reports on cost savings achieved. Enhanced transparency and improved risk management. OMB and covered agencies are to make detailed information on federal IT investments publicly available, and agency CIOs are to categorize their investments by level of risk.
Additionally, in the case of major IT investments rated as high risk for 4 consecutive quarters, the law requires that the agency CIO and the investment's program manager conduct a review aimed at identifying and addressing the causes of the risk (a simple illustration of this trigger appears in the sketch at the end of this section). Agency CIO authority enhancements. Agency heads at covered agencies are required to ensure that CIOs have authority to (1) approve the IT budget requests of their respective agencies, (2) certify that OMB's incremental development guidance is being adequately implemented for IT investments, (3) review and approve contracts for IT, and (4) approve the appointment of other agency employees with the title of CIO. Portfolio review. Covered agencies are to annually review IT investment portfolios in order to, among other things, increase efficiency and effectiveness and identify potential waste and duplication. In establishing the process associated with such portfolio reviews, the law requires OMB to develop standardized performance metrics, to include cost savings, and to submit quarterly reports to Congress on cost savings. Expansion of training and use of IT acquisition cadres. Covered agencies are to update their acquisition human capital plans to address supporting the timely and effective acquisition of IT. In doing so, the law calls for agencies to consider, among other things, establishing IT acquisition cadres or developing agreements with other agencies that have such cadres. Government-wide software purchasing program. The General Services Administration is to develop a strategic sourcing initiative to enhance government-wide acquisition and management of software. In doing so, the law requires that, to the maximum extent practicable, the General Services Administration should allow for the purchase of a software license agreement that is available for use by all executive branch agencies as a single user. Maximizing the benefit of the Federal Strategic Sourcing Initiative. Federal agencies are required to compare their purchases of services and supplies to what is offered under the Federal Strategic Sourcing Initiative. The Administrator for Federal Procurement Policy was also required to issue regulations related to the initiative. In June 2015, OMB released guidance describing how agencies are to implement FITARA. This guidance is intended to, among other things: assist agencies in aligning their IT resources with statutory requirements; establish government-wide IT management controls that will meet the law's requirements, while providing agencies with flexibility to adapt to unique agency processes and requirements; strengthen the relationship between agency CIOs and bureau CIOs; and strengthen CIO accountability for IT costs, schedules, performance, and security. The guidance identified several actions that agencies were to take to establish a basic set of roles and responsibilities (referred to as the common baseline) for CIOs and other senior agency officials, which were needed to implement the authorities described in the law. For example, agencies were required to conduct a self-assessment and submit a plan describing the changes they intended to make to ensure that common baseline responsibilities were implemented. Agencies were to submit their plans to OMB's Office of E-Government and Information Technology by August 15, 2015, and make portions of the plans publicly available on agency websites no later than 30 days after OMB approval. As of November 2016, all agencies had made their plans publicly available.
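The 4-consecutive-quarter trigger described above is, in effect, a simple rule over a time series of quarterly CIO risk ratings. Here is a minimal sketch of that rule in Python; the rating labels and sample histories are invented for illustration, and this is not how the IT Dashboard actually implements the check.

```python
def needs_risk_review(quarterly_ratings: list[str]) -> bool:
    """True if the four most recent quarterly CIO ratings are all 'high'.

    FITARA requires the agency CIO and the investment's program manager
    to review a major IT investment rated high risk for 4 consecutive
    quarters, to identify and address the causes of the risk.
    """
    recent = quarterly_ratings[-4:]
    return len(recent) == 4 and all(r == "high" for r in recent)

# Invented rating histories, oldest to newest.
print(needs_risk_review(["medium", "high", "high", "high", "high"]))  # True
print(needs_risk_review(["high", "high", "medium", "high"]))          # False
```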
In addition, in August 2016, OMB released guidance intended to, among other things, define a framework for achieving the data center consolidation and optimization requirements of FITARA. The guidance requires each agency on a quarterly basis to: maintain complete inventories of all data center facilities owned, operated, or maintained by or on behalf of the agency; develop cost savings targets for fiscal years 2016 through 2018 and report any actual realized cost savings; and measure progress toward meeting optimization metrics. The guidance also directs agencies to develop a data center consolidation and optimization strategic plan that defines the agency's data center strategy for fiscal years 2016, 2017, and 2018. This strategy is to include, among other things, a statement from the agency CIO indicating whether the agency has complied with all data center reporting requirements in FITARA. Further, the guidance indicates that OMB is to maintain a public dashboard that will display consolidation-related cost savings and optimization performance information for the agencies.

IT Acquisitions and Operations Identified by GAO as a High-Risk Area

In February 2015, we introduced a new government-wide high-risk area, Improving the Management of IT Acquisitions and Operations. This area highlighted several critical IT initiatives in need of additional congressional oversight, including (1) reviews of troubled projects; (2) efforts to increase the use of incremental development; (3) efforts to provide transparency relative to the cost, schedule, and risk levels for major IT investments; (4) reviews of agencies' operational investments; (5) data center consolidation; and (6) efforts to streamline agencies' portfolios of IT investments. We noted that implementation of these initiatives was inconsistent and more work remained to demonstrate progress in achieving IT acquisition and operation outcomes. Further, our February 2015 high-risk report stated that, beyond implementing FITARA, OMB and agencies needed to continue to implement our prior recommendations in order to improve their ability to effectively and efficiently invest in IT. Specifically, from fiscal years 2010 through 2015, we made 803 recommendations to OMB and federal agencies to address shortcomings in IT acquisitions and operations. These recommendations included many to improve the implementation of the aforementioned six critical IT initiatives and other government-wide, cross-cutting efforts. We stressed that OMB and agencies should demonstrate government-wide progress in the management of IT investments by, among other things, implementing at least 80 percent of our recommendations related to managing IT acquisitions and operations within 4 years. In February 2017, we issued an update to our high-risk series and reported that, while progress had been made in improving the management of IT acquisitions and operations, significant work still remained to be completed. For example, as of March 2018, OMB and agencies had fully implemented 476 (or about 59 percent) of the 803 recommendations. Figure 1 summarizes the progress that OMB and agencies have made in addressing our recommendations as compared to the 80 percent target, as of March 2018. In addition, in fiscal year 2016, we made 202 new recommendations, thus further reinforcing the need for OMB and agencies to address the shortcomings in IT acquisitions and operations.
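The progress measure behind figure 1 is a simple ratio against the 80 percent goal. A quick calculation using only the figures cited above (this shows the arithmetic, not GAO's actual recommendation-tracking system):

```python
import math

made = 803         # recommendations made, fiscal years 2010 through 2015
implemented = 476  # fully implemented as of March 2018

progress = implemented / made
remaining = math.ceil(0.80 * made) - implemented
print(f"{progress:.1%} implemented; {remaining} more needed to reach 80%")
# -> 59.3% implemented; 167 more needed to reach 80%
```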
Also, beyond addressing our prior recommendations, our 2017 high-risk update noted the importance of OMB and covered federal agencies continuing to expeditiously implement the requirements of FITARA. To further explore the challenges and opportunities for CIOs to improve federal IT acquisitions and operations, we convened a forum on September 14, 2016, with the goal of better informing policymakers and government leadership. Forum participants, which included 13 current and former federal agency CIOs, members of Congress, and private sector IT executives, identified key actions related to seven topics: (1) strengthening FITARA, (2) improving CIO authorities, (3) budget formulation, (4) governance, (5) workforce, (6) operations, and (7) transition planning. A summary of the key actions, by topic area, identified during the forum is provided in figure 2. In addition, in January 2017, the Federal CIO Council concluded that differing levels of authority over IT-related investments and spending have led to inconsistencies in how IT is executed from agency to agency. According to the Council, for those agencies where the CIO has broad authority to manage all IT investments, great progress has been made to streamline and modernize the federal agency's footprint. For the others, where agency CIOs are only able to control pieces of the total IT footprint, it has been harder to achieve improvements.

Congress Has Taken Action to Continue Selected FITARA Provisions and Modernize Federal IT

Congress has recognized the importance of covered agencies' continued implementation of FITARA provisions, and has taken legislative action to extend selected provisions beyond their original dates of expiration. Specifically, Congress and the President enacted laws to: remove the expiration date for enhanced transparency and improved risk management provisions, which were set to expire in 2019; remove the expiration date for portfolio review, which was set to expire in 2019; extend the expiration date for FDCCI from 2018 to 2020; and authorize the availability of funding mechanisms to help further agencies' efforts to modernize IT. In particular, the law known as the Modernizing Government Technology (MGT) Act authorizes agencies to establish working capital funds for use in transitioning from legacy IT systems, as well as for addressing evolving threats to information security. The law also creates a technology modernization fund within the Department of the Treasury, from which agencies can "borrow" money to retire and replace legacy systems as well as acquire or develop systems.

The Current Administration Has Undertaken Efforts to Improve Federal IT

The current administration has initiated additional efforts aimed at improving federal IT, including digital services. Specifically, in March 2017, the administration established the Office of American Innovation, which has a mission to, among other things, make recommendations to the President on policies and plans aimed at improving federal government operations and services. In doing so, the office is to consult with both OMB and the Office of Science and Technology Policy on policies and plans intended to improve government operations and services, improve the quality of life for Americans, and spur job creation.
In May 2017, the administration also established the American Technology Council, which has a goal of helping to transform and modernize federal agency IT and how the federal government uses and delivers digital services. The President is the chairman of this council, and the Federal CIO and the United States Digital Service Administrator are among the members. In addition, on May 11, 2017, the President signed Executive Order 13800, Strengthening the Cybersecurity of Federal Networks and Critical Infrastructure. This Executive Order tasked the Director of the American Technology Council to coordinate a report to the President from the Secretary of the Department of Homeland Security, the Director of OMB, and the Administrator of the General Services Administration, in consultation with the Secretary of Commerce, regarding the modernization of federal IT. As a result, the Report to the President on Federal IT Modernization was issued on December 13, 2017, and outlined the current and envisioned state of federal IT. The report recognized that agencies have attempted to modernize systems but have been stymied by a variety of factors, including resource prioritization, ability to procure services quickly, and technical issues. The report provided multiple recommendations intended to address these issues through the modernization and consolidation of networks and the use of shared services to enable future network architectures. In February 2018, OMB issued guidance for agencies to implement the MGT Act. The guidance was intended to provide agencies additional information regarding the Technology Modernization Fund, and the administration and funding of the related IT Working Capital Funds. Specifically, the guidance allowed agencies to begin submitting initial project proposals for modernization on February 27, 2018. In addition, in accord with the MGT Act, the guidance provides details of the Technology Modernization Board, which is to consist of (1) the Federal CIO; (2) a senior official from the General Services Administration; (3) a member of the Department of Homeland Security's National Protection and Program Directorate; and (4) four federal employees with technical expertise in IT development, financial management, cybersecurity and privacy, and acquisition, appointed by the Director of OMB.

Agencies Can Improve IT Acquisitions and Operations

Agencies have taken steps to improve the management of IT acquisitions and operations. However, agencies would be better positioned to realize billions in cost savings and additional management improvements if they addressed the numerous recommendations we have made aimed at improving data center consolidation, increasing transparency via OMB's IT Dashboard, implementing incremental development, managing software licenses, reviewing IT acquisitions, implementing key IT workforce activities, and addressing aging legacy systems.

Agencies Have Made Progress in Consolidating Data Centers, but Need to Take Action to Achieve Planned Cost Savings

One of the key initiatives to implement FITARA is data center consolidation. OMB established FDCCI in February 2010 to improve the efficiency, performance, and environmental footprint of federal data center activities, and the enactment of FITARA codified and expanded the initiative.
However, in a series of reports that we issued from July 2011 through August 2017, we noted that, while data center consolidation could potentially save the federal government billions of dollars, weaknesses existed in several areas, including agencies' data center consolidation plans, data center optimization, and OMB's tracking and reporting on related cost savings. In these reports, we made a matter for congressional consideration, and a total of 160 recommendations to OMB and 24 agencies to improve the execution and oversight of the initiative. Most agencies and OMB agreed with our recommendations or had no comments. As of March 2018, 83 of these recommendations remained open. For example, in May 2017, we reported that the 24 agencies participating in FDCCI collectively had made progress on their data center closure efforts. Specifically, as of August 2016, these agencies had identified a total of 9,995 data centers, of which they reported having closed 4,388, and having plans to close a total of 5,597 data centers through fiscal year 2019. Notably, the Departments of Agriculture, Defense, the Interior, and the Treasury accounted for 84 percent of the completed closures. In addition, that report noted that 18 of the 24 agencies had reported achieving about $2.3 billion collectively in cost savings and avoidances from their data center consolidation and optimization efforts from fiscal year 2012 through August 2016. The Departments of Commerce, Defense, Homeland Security, and the Treasury accounted for approximately $2.0 billion (or 87 percent) of the total. Further, 23 agencies reported about $656 million collectively in planned savings for fiscal years 2016 through 2018. This is about $3.3 billion less than the estimated $4.0 billion in planned savings for fiscal years 2016 through 2018 that agencies reported to us in November 2015. Figure 3 presents a comparison of the amounts of cost savings and avoidances reported by agencies to OMB and the amounts the agencies reported to us. As mentioned previously, FITARA required agencies to submit, no later than the end of fiscal year 2016 and annually thereafter, multi-year strategies to achieve the consolidation and optimization of their data centers. Among other things, this strategy is required to include such information as data center consolidation and optimization metrics, and year-by-year calculations of investments and cost savings through October 1, 2020. Further, OMB's August 2016 guidance on data center optimization contained additional information for how agencies are to implement the strategic plan requirements of FITARA, and stated that agencies were required to publicly post their strategic plans to their agency-owned digital strategy websites by September 30, 2016. As of April 2017, only 7 of the 23 agencies that submitted their strategic plans—the Departments of Agriculture, Education, Homeland Security, and Housing and Urban Development; the General Services Administration; the National Science Foundation; and the Office of Personnel Management—had addressed all five elements required by the OMB memorandum implementing FITARA. The remaining 16 agencies either partially met or did not meet the requirements. For example, most agencies partially met or did not meet the requirements to provide information related to data center closures and cost savings metrics. The Department of Defense did not submit a plan and was rated as not meeting any of the requirements.
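The discrepancy shown in figure 3 is, at bottom, a reconciliation problem: the same cost savings figure is reported through more than one channel, and the amounts do not agree. A minimal sketch of such a cross-channel consistency check follows; the agency names and dollar amounts are invented for illustration.

```python
# Savings (in millions of dollars) reported through two channels.
# All names and amounts below are invented for illustration.
reported_to_omb = {"Agency A": 310.0, "Agency B": 95.5, "Agency C": 12.0}
reported_to_gao = {"Agency A": 310.0, "Agency B": 61.2, "Agency C": 0.0}

TOLERANCE = 1.0  # allowable difference, in millions

for agency, omb_amount in sorted(reported_to_omb.items()):
    gao_amount = reported_to_gao.get(agency, 0.0)
    if abs(omb_amount - gao_amount) > TOLERANCE:
        print(f"{agency}: ${omb_amount}M to OMB vs ${gao_amount}M to GAO "
              "-- inconsistent across reporting mechanisms")
```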
To better ensure that federal data center consolidation and optimization efforts improve governmental efficiency and achieve cost savings, in our May 2017 report, we recommended that 11 of the 24 agencies take actions to ensure that the amounts of achieved data center cost savings and avoidances are consistent across all reporting mechanisms. We also recommended that 17 of the 24 agencies each take action to complete missing elements in their strategic plans and submit their plans to OMB in order to optimize their data centers and achieve cost savings. Twelve agencies agreed with our recommendations, 2 did not agree, and 10 agencies and OMB did not state whether they agreed or disagreed. More recently, in August 2017, we reported that agencies needed to address challenges in optimizing their data centers in order to achieve cost savings. Specifically, we noted that, according to the 24 agencies' data center consolidation initiative strategic plans as of April 2017, most agencies were not planning to meet OMB's optimization targets by the end of fiscal year 2018. Further, of the 24 agencies, 5—the Department of Commerce, the Environmental Protection Agency, the National Science Foundation, the Small Business Administration, and the U.S. Agency for International Development—reported plans to fully meet their applicable targets by the end of fiscal year 2018; 13 reported plans to meet some, but not all, of the targets; 4 reported that they did not plan to meet any targets; and 2 did not have a basis to report planned optimization milestones because they did not report having any agency-owned data centers. Figure 4 summarizes agencies' progress in meeting OMB's optimization targets as of February 2017, and planned progress to be achieved by September 2017 and September 2018, as of April 2017. FITARA required OMB to establish a data center optimization metric specific to measuring server efficiency, and required agencies to report on progress in meeting this metric. To effectively measure progress against this metric, OMB directed agencies to replace the manual collection and reporting of systems, software, and hardware inventory housed within agency-owned data centers with automated monitoring tools, and to complete this effort no later than the end of fiscal year 2018. Agencies are required to report progress in implementing automated monitoring tools and server utilization averages at each data center as part of their quarterly data center inventory reporting to OMB. As of February 2017, 4 of the 22 agencies reporting agency-owned data centers in their inventory—the National Aeronautics and Space Administration, National Science Foundation, Social Security Administration, and U.S. Agency for International Development—reported that they had implemented automated monitoring tools at all of their data centers. Further, 10 reported that they had implemented automated monitoring tools at between 1 and 57 percent of their centers, and 8 had not yet begun to report the implementation of these tools. In total, the 22 agencies reported that automated tools were implemented at 123 (or about 3 percent) of the 4,528 total agency-owned data centers, while the remaining 4,405 (or about 97 percent) of these data centers were not reported as having these tools implemented. Figure 5 summarizes the number of agency-reported data centers with automated monitoring tools implemented, including the number of tiered and non-tiered centers.
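The server utilization averages that agencies must report each quarter are a straightforward aggregation of the samples an automated monitoring tool collects. A minimal sketch of that aggregation follows; the data center names, servers, and utilization samples are invented, and real monitoring tools collect far more detail.

```python
from statistics import mean

# Invented utilization samples (percent busy) from an automated
# monitoring tool, grouped by data center and server.
samples = {
    "DC-East": {"srv01": [12.0, 8.5, 15.2], "srv02": [3.1, 2.7, 4.0]},
    "DC-West": {"srv03": [55.0, 61.3, 58.9]},
}

for center, servers in samples.items():
    # Average each server's samples, then average across servers to get
    # a per-center figure for quarterly inventory reporting.
    per_server = [mean(vals) for vals in servers.values()]
    print(f"{center}: average server utilization {mean(per_server):.1f}%")
```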
To address challenges in optimizing federal data centers, in our August 2017 report, we made recommendations to 18 agencies and OMB. Ten agencies agreed with our recommendations, three agencies partially agreed, and six (including OMB) did not state whether they agreed or disagreed.

Risks Need to Be Fully Considered When Agencies Rate Their Major Investments on OMB's IT Dashboard

To facilitate transparency across the government in acquiring and managing IT investments, OMB established a public website—the IT Dashboard—to provide detailed information on major investments at 26 agencies, including ratings of their performance against cost and schedule targets. Among other things, agencies are to submit ratings from their CIOs, which, according to OMB's instructions, should reflect the level of risk facing an investment relative to that investment's ability to accomplish its goals. In this regard, FITARA includes a requirement for covered agency CIOs to categorize their major IT investment risks in accordance with OMB guidance. Over the past 6 years, we have issued a series of reports about the Dashboard that noted both significant steps OMB has taken to enhance the oversight, transparency, and accountability of federal IT investments by creating its Dashboard, and concerns about the accuracy and reliability of the data. In total, we have made 47 recommendations to OMB and federal agencies to help improve the accuracy and reliability of the information on the Dashboard and to increase its availability. Most agencies agreed with our recommendations or had no comments. As of March 2018, 19 recommendations remained open. In June 2016, we determined that 13 of the 15 agencies selected for in-depth review had not fully considered risks when rating their major investments on the Dashboard. Specifically, our assessments of risk for 95 investments at the 15 selected agencies matched the CIO ratings posted on the Dashboard 22 times, showed more risk 60 times, and showed less risk 13 times. Figure 6 summarizes how our assessments compared to the selected investments' CIO ratings. Aside from the inherently judgmental nature of risk ratings, we identified three factors which contributed to differences between our assessments and the CIO ratings: Forty of the 95 CIO ratings were not updated during April 2015 (the month we conducted our review), which led to differences between our assessments and the CIOs' ratings. This underscores the importance of frequent rating updates, which help to ensure that the information on the Dashboard is timely and accurately reflects recent changes to investment status. Three agencies' rating processes spanned longer than 1 month. Longer processes mean that CIO ratings are based on older data, and may not reflect the current level of investment risk. Seven agencies' rating processes did not focus on active risks. According to OMB's guidance, CIO ratings should reflect the CIO's assessment of the risk and the investment's ability to accomplish its goals. CIO ratings that do not incorporate active risks increase the chance that ratings overstate the likelihood of investment success. As a result, we concluded that the associated risk rating processes used by the 15 agencies were generally understating the level of an investment's risk, raising the likelihood that critical federal investments in IT are not receiving the appropriate levels of oversight.
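Two of the factors above, stale ratings and ratings that ignore active risks, lend themselves to simple hygiene checks before a rating is submitted. The sketch below is a rough illustration under invented assumptions; the data model and the one-month threshold are ours, not OMB's.

```python
from datetime import date

def rating_concerns(last_updated: date, today: date,
                    active_risks: int, rating: str) -> list[str]:
    """Flag conditions GAO found associated with understated CIO ratings."""
    concerns = []
    if (today - last_updated).days > 31:
        concerns.append("rating not updated within the past month")
    if active_risks > 0 and rating == "low":
        concerns.append("active risks not reflected in a 'low' rating")
    return concerns

# Invented example: a 'low' rating last updated in February, checked in
# late April, on an investment carrying three active risks.
print(rating_concerns(date(2015, 2, 10), date(2015, 4, 30), 3, "low"))
```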
To better ensure that the Dashboard ratings more accurately reflect risk, we made 25 recommendations to 15 agencies to improve the quality and frequency of their CIO ratings. Twelve agencies generally agreed with or did not comment on the recommendations and three agencies disagreed, stating that their CIO ratings were adequate. However, we noted that weaknesses in these three agencies’ processes still existed and that we continued to believe our recommendations were appropriate. Agencies Need to Increase Their Use of Incremental Development Practices OMB has emphasized the need to deliver investments in smaller parts, or increments, in order to reduce risk, deliver capabilities more quickly, and facilitate the adoption of emerging technologies. In 2010, it called for agencies’ major investments to deliver functionality every 12 months and, since 2012, every 6 months. Subsequently, FITARA codified a requirement that covered agency CIOs certify that IT investments are adequately implementing incremental development, as defined in the capital planning guidance issued by OMB. Further, subsequent OMB guidance on the law’s implementation, issued in June 2015, directed agency CIOs to define processes and policies for their agencies which ensure that they certify that IT resources are adequately implementing incremental development. However, in May 2014, we reported that 66 of 89 selected investments at five major agencies did not plan to deliver capabilities in 6-month cycles, and less than half of these investments planned to deliver functionality in 12-month cycles. We also reported that only one of the five agencies had complete incremental development policies. Accordingly, we recommended that OMB clarify its guidance on incremental development and that the selected agencies update their associated policies to comply with OMB’s revised guidance (once made available), and consider the factors identified in our report when doing so. Four of the six agencies agreed with our recommendations or had no comments, one agency partially agreed, and the remaining agency disagreed with the recommendations. The agency that disagreed did not believe that its recommendations should be dependent upon OMB taking action to update guidance. In response, we noted that only one of the recommendations to that agency depended upon OMB action, and we maintained that the action was warranted and could be implemented. Subsequently, in August 2016, we reported that agencies had not fully implemented incremental development practices for their software development projects. Specifically, we noted that, as of August 31, 2015, 22 federal agencies had reported on the Dashboard that 300 of 469 active software development projects (64 percent) were planning to deliver usable functionality every 6 months for fiscal year 2016, as required by OMB guidance. The remaining 169 projects (or 36 percent) that were reported as not planning to deliver functionality every 6 months, agencies provided a variety of explanations for not achieving that goal. These included project complexity, the lack of an established project release schedule, or that the project was not a software development project. Further, in conducting an in-depth review of seven selected agencies’ software development projects, we determined that 129 out of 287 software development projects delivered functionality every 6 months for fiscal year 2015 (45 percent) and 113 out of 206 software projects (55 percent) planned to do so in fiscal year 2016. 
However, significant differences existed between the delivery rates that the agencies reported to us and what they reported on the Dashboard. For example, for four agencies (the Departments of Commerce, Education, Health and Human Services, and the Treasury), the percentage of delivery reported to us was at least 10 percentage points lower than what was reported on the Dashboard. These differences were due to (1) our identification of fewer software development projects than agencies reported on the Dashboard and (2) the fact that information reported to us was generally more current than the information reported on the Dashboard. We concluded that, by not having up-to-date information on the Dashboard about whether the project is a software development project and about the extent to which projects are delivering functionality, these seven agencies were at risk that OMB and key stakeholders may make decisions regarding the agencies' investments without the most current and accurate information. As such, we recommended that the seven selected agencies review major IT investment project data reported on the Dashboard and update the information as appropriate, ensuring that these data are consistent across all reporting channels. Finally, while OMB has issued guidance requiring agency CIOs to certify that each major IT investment's plan for the current year adequately implements incremental development, only three agencies (the Departments of Commerce, Homeland Security, and Transportation) had defined processes and policies intended to ensure that the CIOs certify that major IT investments are adequately implementing incremental development. Accordingly, we recommended that the remaining four agencies—the Departments of Defense, Education, Health and Human Services, and the Treasury—establish policies and processes for certifying that major IT investments adequately use incremental development. The Departments of Education and Health and Human Services agreed with our recommendation, while the Department of Defense disagreed and stated that its existing policies address the use of incremental development. However, we noted that the department's policies did not comply with OMB's guidance and that we continued to believe our recommendation was appropriate. The Department of the Treasury did not comment on its recommendation. More recently, in November 2017, we reported that agencies needed to improve their certification of incremental development. Specifically, agencies reported that, as of August 2016, 103 of 166 major IT software development investments (62 percent) were certified by the agency CIO for implementing adequate incremental development in fiscal year 2017, as required by FITARA. Table 1 identifies the number of federal agency major IT software development investments certified for adequate incremental development, as reported on the IT Dashboard for fiscal year 2017. Officials from 21 of the 24 agencies in our review reported that challenges hindered their ability to implement incremental development, which included: (1) inefficient governance processes; (2) procurement delays; and (3) organizational changes associated with transitioning from a traditional software methodology that takes years to deliver a product, to incremental development, which delivers products in shorter time frames.
Nevertheless, 21 agencies reported that the certification process was beneficial because they used the information from the process to assist with identifying investments that could more effectively use an incremental approach, and used lessons learned to improve the agencies' incremental processes. In addition, as of August 2017, only 4 of the 24 agencies had clearly defined CIO incremental development certification policies and processes that contained descriptions of the role of the CIO in the process and how the CIO's certification will be documented, and that included definitions of incremental development and time frames for delivering functionality consistent with OMB guidance. Figure 7 summarizes our analysis of agencies' policies for CIO certification of the adequate use of incremental development in IT investments. Lastly, we reported that OMB's capital planning guidance for fiscal year 2018 (issued in June 2016) lacked clarity regarding how agencies were to address the requirement for certifying adequate incremental development. While the 2018 guidance stated that agency CIOs are to provide the certifications needed to demonstrate compliance with FITARA, the guidance did not include a specific reference to the provision requiring CIO certification of adequate incremental development. We noted that, as a result of this change, OMB placed the burden on agencies to know and understand how to demonstrate compliance with FITARA's incremental development provision. Further, because of the lack of clarity in the guidance as to what agencies were to provide, OMB could not demonstrate how the fiscal year 2018 guidance ensured that agencies provided the certifications specifically called for in the law. In August 2017, OMB issued its fiscal year 2019 guidance, which addressed the weaknesses we identified in the previous fiscal year's guidance. Specifically, the revised guidance requires agency CIOs to make an explicit statement regarding the extent to which the CIO is able to certify the use of incremental development, and to include a copy of that statement in the agency's public congressional budget justification materials. As part of the statement, an agency CIO must also identify which specific bureaus or offices are using incremental development on all of their investments. In our November 2017 report, we made 19 recommendations to 17 agencies to improve reporting and certification of incremental development. Eleven agencies agreed with our recommendations, 1 partially agreed, and 5 did not state whether they agreed or disagreed. OMB disagreed with several of our conclusions, which we continued to believe were valid. In total, from May 2014 through November 2017, we made 42 recommendations to OMB and agencies to improve their implementation of incremental development. As of March 2018, 34 of our recommendations remained open.

Agencies Need to Better Manage Software Licenses to Achieve Savings

Federal agencies engage in thousands of software licensing agreements annually. The objective of software license management is to manage, control, and protect an organization's software assets. Effective management of these licenses can help avoid purchasing too many licenses, which can result in unused software, as well as too few licenses, which can result in noncompliance with license terms and cause the imposition of additional fees. As part of its PortfolioStat initiative, OMB has developed policy that addresses software licenses.
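The over- and under-purchasing problem just described is, in its simplest form, a comparison of license entitlements against observed installations. A minimal sketch of that reconciliation follows; the product names and counts are invented, and a real inventory would draw on procurement records and automated discovery tools.

```python
# Invented license counts: entitlements purchased vs. copies installed.
entitlements = {"Product X": 500, "Product Y": 120, "Product Z": 40}
installed = {"Product X": 310, "Product Y": 150, "Product Z": 40}

for product, owned in sorted(entitlements.items()):
    used = installed.get(product, 0)
    if used < owned:
        # Too many licenses: unused software, potential savings.
        print(f"{product}: {owned - used} unused licenses")
    elif used > owned:
        # Too few licenses: noncompliance risk and possible added fees.
        print(f"{product}: {used - owned} installations over entitlement")
```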
The PortfolioStat policy requires agencies to conduct an annual, agency-wide IT portfolio review to, among other things, reduce commodity IT spending. Such areas of spending could include software licenses. In May 2014, we reported on federal agencies' management of software licenses and determined that better management was needed to achieve significant savings government-wide. In particular, 22 of the 24 major agencies did not have comprehensive license policies and only 2 had comprehensive license inventories. In addition, we identified five leading software license management practices, and the agencies' implementation of these practices varied. As a result of agencies' mixed management of software licensing, agencies' oversight of software license spending was limited or lacking, thus potentially leading to missed savings. However, the potential savings could be significant considering that, in fiscal year 2012, one major federal agency reported saving approximately $181 million by consolidating its enterprise license agreements, even when its oversight process was ad hoc. Accordingly, we recommended that OMB issue needed guidance to agencies; we also made 135 recommendations to the 24 agencies to improve their policies and practices for managing licenses. Among other things, we recommended that the agencies regularly track and maintain a comprehensive inventory of software licenses and analyze the inventory to identify opportunities to reduce costs and better inform investment decision making. Most agencies generally agreed with the recommendations or had no comments. As of March 2018, 95 of the recommendations had not been implemented. Table 2 reflects the extent to which agencies implemented recommendations in these areas.

Agencies Need to Ensure That IT Acquisitions Are Reviewed and Approved by Chief Information Officers

FITARA includes a provision to enhance covered agency CIOs' authority through, among other things, requiring agency heads to ensure that CIOs review and approve IT contracts. OMB's FITARA implementation guidance expanded upon this section of FITARA in a number of ways. Specifically, according to the guidance: CIOs may review and approve IT acquisition strategies and plans, rather than individual IT contracts; CIOs can designate other agency officials to act as their representatives, but the CIOs must retain accountability; Chief Acquisition Officers (CAO) are responsible for ensuring that all IT contract actions are consistent with CIO-approved acquisition strategies and plans; and CAOs are to indicate to the CIOs when planned acquisition strategies and acquisition plans include IT. In January 2018, we reported that most of the CIOs at the 22 selected agencies were not adequately involved in reviewing billions of dollars of IT acquisitions. For instance, most of the 22 selected agencies did not identify all of their IT contracts. The selected agencies identified 78,249 IT-related contracts, to which they obligated $14.7 billion in fiscal year 2016. However, we identified 31,493 additional contracts with $4.5 billion obligated, raising the total amount obligated to IT contracts in fiscal year 2016 to at least $19.2 billion. Figure 8 reflects the obligations agencies reported to us relative to the obligations we identified. The percentage of additional IT contract obligations we identified varied among the selected agencies. For example, the Department of State did not identify 1 percent of its IT contract obligation dollars.
Conversely, 8 agencies did not identify over 40 percent of their IT-related contract obligation dollars. Many of the selected agencies that did not identify these IT acquisitions did not follow OMB guidance. Specifically, 14 of the 22 agencies did not involve the acquisition office in their process to identify IT acquisitions for CIO review, as required by OMB. In addition, 7 agencies did not establish guidance to aid officials in recognizing IT. Until agencies involve the acquisitions office in their IT identification processes and establish supporting guidance, they cannot ensure that they will identify all IT acquisitions. Without proper identification of IT acquisitions, agencies and CIOs cannot effectively provide oversight of these acquisitions. In addition to not identifying all IT contracts, 14 of the 22 selected agencies did not fully satisfy OMB's requirement that the CIO review and approve IT acquisition plans or strategies. Further, only 11 of 96 randomly selected IT contracts at 10 agencies that we evaluated were CIO-reviewed and approved as required by OMB's guidance. The 85 IT contracts not reviewed had a total possible value of approximately $23.8 billion. Until agencies ensure that CIOs are able to review and approve all IT acquisitions, CIOs will continue to have limited visibility and input into their agencies' planned IT expenditures and will not be able to use the increased authority that FITARA's contract approval provision is intended to provide. Further, agencies will likely miss an opportunity to strengthen CIOs' authority and the oversight of IT acquisitions. As a result, agencies may award IT contracts that are duplicative, wasteful, or poorly conceived. As a result of this report, we made 39 recommendations, including that agencies ensure that acquisition offices are involved in identifying IT, issue related guidance, and ensure that IT acquisitions are reviewed according to OMB guidance. OMB and 20 agencies generally agreed with or did not comment on the recommendations. One agency agreed with one recommendation, but disagreed with another. The remaining agency disagreed with two recommendations. We subsequently removed one of these recommendations from the final report, but not the other. As of March 2018, all 39 recommendations remain open.

Implementing Key IT Workforce Planning Activities Can Help Ensure Acquisition Skill Gaps Are Addressed

An area where agencies can improve their ability to acquire IT is workforce planning. In November 2016, we reported that IT workforce planning activities, when effectively implemented, can facilitate the success of major acquisitions. Ensuring program staff have the necessary knowledge and skills is a factor commonly identified as critical to the success of major investments. If agencies are to ensure that this critical success factor has been met, then IT skill gaps need to be adequately assessed and addressed through a workforce planning process. In this regard, we reported that four workforce planning steps and eight key activities can assist agencies in assessing and addressing IT knowledge and skill gaps. Specifically, these four steps are: (1) setting the strategic direction for IT workforce planning, (2) analyzing the workforce to identify skill gaps, (3) developing and implementing strategies to address IT skill gaps, and (4) monitoring and reporting progress in addressing skill gaps. Each of the four steps is supported by key activities (as summarized in table 3).
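At its core, the gap analysis in step 2 compares the competencies a portfolio of acquisitions requires against those on board. A minimal sketch of that comparison follows; the skills and staffing counts are invented, and a real assessment would also weigh proficiency levels, not just headcounts.

```python
# Invented competency needs and current staffing for an IT acquisition office.
required = {"cloud architecture": 6, "contract management": 4, "cybersecurity": 8}
on_board = {"cloud architecture": 2, "contract management": 4, "cybersecurity": 5}

gaps = {skill: need - on_board.get(skill, 0)
        for skill, need in required.items()
        if need > on_board.get(skill, 0)}
print(gaps)  # -> {'cloud architecture': 4, 'cybersecurity': 3}
```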
However, in our November 2016 report, we determined that the five agencies that we selected for in-depth analysis had not fully implemented key workforce planning steps and activities. For example, four of these agencies had not demonstrated an established IT workforce planning process. In addition, none of these agencies had fully assessed their workforce competencies and staffing needs regularly or established strategies and plans to address gaps in these areas. Figure 9 illustrates the extent to which the five selected agencies had fully, partially, or not implemented key IT workforce planning activities. The weaknesses identified were due, in part, to these agencies lacking comprehensive policies that required such activities, or failing to apply the policies to IT workforce planning. We concluded that, until these weaknesses are addressed, the five agencies risk not adequately assessing and addressing gaps in knowledge and skills that are critical to the success of major acquisitions. Accordingly, we made five recommendations to the five selected agencies to address the weaknesses in their IT workforce planning practices that we identified. Four agencies—the Departments of Commerce, Health and Human Services, Transportation, and the Treasury—agreed with our recommendations and one, the Department of Defense, partially agreed. As of March 2018, the agencies had not addressed the five recommendations.

Agencies Need to Address Aging Legacy Systems

IT investments across the federal government are becoming increasingly obsolete. Specifically, in May 2016, we reported that many agencies were using systems that had components that were, in some cases, at least 50 years old. For example, we determined that the Department of Defense was using 8-inch floppy disks in a legacy system that coordinates the operational functions of the nation's nuclear forces. In addition, the Department of the Treasury was using assembly language code—a computer language initially used in the 1950s and typically tied to the hardware for which it was developed. Further, in some cases, the vendors were no longer providing support for hardware or software. For example, each of the 12 agencies in our review reported using unsupported operating systems and components. At the time, five of the selected agencies reported using 1980s and 1990s Microsoft operating systems that stopped being supported by the vendor more than a decade ago. Table 4 provides examples of legacy systems across the federal government that agencies report are 30 years old or older and use obsolete software or hardware, and identifies those that do not have specific plans with time frames to modernize or replace these investments. To address this issue, we recommended that 12 agencies identify and plan to modernize or replace legacy systems, including establishing time frames, activities to be performed, and functions to be replaced or enhanced. Most agencies agreed with our recommendations or had no comment. As of March 2018, all of the recommendations remained open. In conclusion, the federal government has an opportunity to save billions of dollars; to improve the transparency and management of IT acquisitions and operations; and to strengthen the authority of CIOs to provide needed direction and oversight. The forum we held also recommended that CIOs be given more authority, and noted the important role played by the Federal CIO.
Most agencies have taken steps to improve the management of IT acquisitions and operations by implementing key initiatives, including data center consolidation, efforts to increase transparency via OMB's IT Dashboard, incremental development, management of software licenses, approval of IT acquisitions, implementation of key IT workforce practices, and addressing legacy IT; and they have continued to address recommendations we have made over the past several years. However, additional improvements are needed, and further efforts by OMB and federal agencies to implement our previous recommendations would better position them to improve the management of IT acquisitions and operations. To help ensure that these efforts succeed, OMB's and agencies' continued implementation of our recommendations is essential, and we will continue to monitor that implementation. Chairmen Meadows and Hurd, Ranking Members Connolly and Kelly, and Members of the Subcommittees, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time.

GAO Contacts and Staff Acknowledgments

If you or your staff have any questions about this testimony, please contact Dave Powner, Director, Information Technology, at (202) 512-9286 or pownerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Kevin Walsh (Assistant Director), Chris Businsky, Rebecca Eyler, Meredith Raymond, and Jessica Waselkow (Analyst in Charge). This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study

The federal government plans to invest almost $96 billion in IT in fiscal year 2018. Historically, these investments have too often failed, incurred cost overruns and schedule slippages, or contributed little to mission-related outcomes. In December 2014, Congress and the President enacted FITARA, aimed at improving covered agencies' acquisitions of IT. Further, in February 2015, GAO added improving the management of IT acquisitions and operations across government to its high-risk list. This statement summarizes agencies' progress in improving the management of IT acquisitions and operations. Among other topics, GAO summarized its published reports on (1) data center consolidation, (2) incremental software development practices, (3) IT acquisitions, (4) IT workforce, and (5) legacy IT.

What GAO Found

The Office of Management and Budget (OMB) and federal agencies have taken steps to improve the management of information technology (IT) acquisitions and operations through a series of initiatives, including (1) data center consolidation, (2) implementation of incremental development practices, (3) approval of IT acquisitions, (4) implementation of key IT workforce practices, and (5) addressing aging legacy IT systems. As of March 2018, the agencies had fully implemented about 59 percent of the approximately 800 related recommendations that GAO made during fiscal years 2010 through 2015. However, important additional actions are needed.

Consolidating data centers. OMB launched an initiative in 2010 to reduce data centers, which was codified and expanded by a law commonly referred to as the Federal Information Technology Acquisition Reform Act (FITARA). GAO has since noted that, while this initiative could potentially save the government billions of dollars, weaknesses exist in areas such as optimization and OMB's reporting on related cost savings. Accordingly, GAO has made 160 recommendations to OMB and agencies to improve the initiative; however, about half of GAO's recommendations have not yet been implemented.

Implementing incremental development. OMB has emphasized the need for agencies to deliver investments in smaller increments to reduce risk and deliver capabilities more quickly. Further, GAO has issued reports highlighting actions needed by OMB and agencies to improve their implementation of incremental development. In these reports, GAO made 42 related recommendations, but the majority of GAO's recommendations have not yet been addressed.

Approval of IT acquisitions. OMB's FITARA implementation guidance required covered agencies' chief information officers (CIO) to review and approve IT acquisition plans. In January 2018, GAO reported that many agencies' CIOs were not reviewing and approving acquisition plans, as required by OMB. GAO made 39 recommendations to improve the review and approval of IT acquisitions, but they have not yet been implemented by the agencies.

Implementation of key IT workforce practices. Effective IT workforce planning can help agencies improve their ability to acquire IT. In November 2016, GAO reported on agencies' IT workforce planning activities. GAO noted that five selected agencies had not fully implemented key workforce planning activities and recommended that they do so, but the agencies have not yet addressed the recommendations.

Addressing aging legacy IT systems. Legacy IT investments across the federal government are becoming increasingly obsolete and consuming an increasing amount of IT dollars.
In May 2016, GAO reported that many agencies were using systems that had components that were, in some cases, at least 50 years old. GAO noted, however, that several agencies did not have specific plans with time frames to modernize or replace these investments. GAO recommended that 12 agencies plan to modernize or replace legacy systems; these recommendations have not yet been implemented.

What GAO Recommends

From fiscal years 2010 through 2015, GAO made about 800 recommendations to OMB and federal agencies to address shortcomings in IT acquisitions and operations. Among other recommendations, GAO made recommendations to improve the oversight and execution of the data center consolidation initiative, incremental development policies, the review and approval of IT acquisitions, and implementation of key workforce planning activities, and to address aging federal IT systems. Most agencies agreed with GAO's recommendations. In addition, from fiscal year 2016 to the present, GAO has made more than 200 new recommendations in this area. GAO will continue to monitor agencies' implementation of these recommendations.
Background

Technology Sector

The technology sector has major employment hubs across the country, including the San Francisco Bay area, the greater New York City region, and the Washington-Arlington-Alexandria region (see fig. 1). In addition, technology workers are employed at companies outside the technology sector, such as in the retail or financial services industries. For example, a large retail company may require technology workers to create and manage its online sales activities, but the company itself would be considered part of the retail industry.

Federal Requirements Related to Equal Employment Opportunity and Affirmative Action

Private companies are generally prohibited by federal law from discriminating in employment on the basis of race, color, religion, sex, national origin, age, and disability status. Additionally, federal contractors and subcontractors are generally required to take affirmative action to ensure that all applicants and employees are treated without regard to race, sex, color, religion, national origin, sexual orientation, and gender identity, and to employ or advance in employment qualified individuals with disabilities and qualified covered veterans. EEOC is responsible for enforcement of federal antidiscrimination laws, and OFCCP enforces affirmative action and nondiscrimination requirements for federal contractors. EEOC and OFCCP have some shared activities and have established a memorandum of understanding (MOU) to minimize any duplication of effort. For example, under the MOU, individual complaints filed with OFCCP alleging discrimination under Title VII are generally referred to EEOC. In addition, on occasions when EEOC receives a complaint not within its purview, such as cases that involve veteran status, but over which it believes OFCCP has jurisdiction, it will refer the complaint to OFCCP.

U.S. Equal Employment Opportunity Commission

The EEOC, created by Title VII of the Civil Rights Act of 1964, enforces federal laws that prohibit employment discrimination on the basis of race, sex, color, religion, national origin, age, and disability. As the nation's primary enforcer of antidiscrimination laws, EEOC investigates charges of employment discrimination from the public, litigates major cases, and conducts outreach to prevent discrimination by educating employers and workers. In fiscal year 2016, EEOC received about 91,500 charges, secured more than $482 million for victims of discrimination, and filed 114 lawsuits. According to EEOC, many states, counties, cities, and towns have their own laws prohibiting discrimination, usually similar to those EEOC enforces, as well as agencies responsible for enforcing those laws, called Fair Employment Practices Agencies. However, in some cases, these agencies enforce laws that offer greater protection to workers. An individual can file a charge with either the EEOC or a Fair Employment Practices Agency. When an individual initially files with a Fair Employment Practices Agency that has a worksharing agreement with the EEOC, and the allegation is covered by a law enforced by the EEOC, the Fair Employment Practices Agency will dual file the charge with EEOC (meaning EEOC will receive a copy of the charge), but will usually retain the charge for processing.
If the charge is initially filed with EEOC and the charge is also covered by state or local law, EEOC dual files the charge with the state or local Fair Employment Practices Agency (meaning the Fair Employment Practices Agency will receive a copy of the charge), but EEOC ordinarily retains the charge for processing. EEOC also pursues a limited number of cases each year designed to combat systemic discrimination, defined by the agency as patterns or practices where the alleged discrimination presented by a complainant has a broad impact on an industry, profession, company, or geographic location. EEOC can also initiate a systemic investigation under Title VII with the approval of an EEOC commissioner, called a “commissioner charge,” provided the commissioner finds there is a reasonable basis for the investigation. In addition, EEOC district directors can approve systemic investigations, called “directed investigations,” which are initiated by EEOC field office directors under the Age Discrimination in Employment Act and the Equal Pay Act. Under Title VII, EEOC generally requires that large employers and non-exempt federal contractors file Employer Information Reports (EEO-1 reports) annually, which collect employees' demographic data by business location on sex, race, and ethnic group for 10 occupational job categories. According to EEOC documentation, EEO-1 data are used in investigations of Title VII violations, litigation, research, comparative analyses, class action suits, and affirmative action plans.

Office of Federal Contract Compliance Programs

The OFCCP is responsible for ensuring that the nearly 200,000 federal contractor establishments comply with federal nondiscrimination and affirmative action requirements. Under Executive Order 11246 and other federal laws and regulations, covered federal contractors and subcontractors are prohibited from discriminating in employment on the basis of race, color, religion, sex, sexual orientation, gender identity, or national origin and are required to take affirmative action to help ensure that all applicants and employees are treated without regard to these factors. In general, OFCCP's regulations require covered contractors to comply with certain recordkeeping and reporting requirements, and provide for enforcement procedures such as compliance evaluations and complaint investigations to assist OFCCP in ensuring federal contractor compliance with these regulations. Among other provisions, OFCCP's regulations generally require that covered contractors prepare and maintain an affirmative action program (AAP). Under OFCCP's regulations, an AAP is a management tool that is designed to ensure equal employment opportunity, with an underlying premise that the gender, racial, and ethnic makeup of a contractor's workforce should be representative of the labor pools from which the contractor recruits and selects. Companies must create an AAP for each business establishment—generally, a physical facility or unit that produces the goods or services, such as a factory, office, or store of the federal contractor. An AAP will also include any practical steps to address underrepresentation of women and minorities, such as expanding employment opportunities to underrepresented groups. Covered contractors must also comply with certain recordkeeping requirements, including records pertaining to hiring, promotion, layoff or termination, rates of pay, and applications, among other records.
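The diagnostic comparison at the core of an AAP, comparing incumbency with availability for each job group, can be illustrated with a short script. The sketch below is illustrative only: the job groups, the percentages, and the needs_placement_goal() helper are hypothetical, and the "any shortfall" rule shown is one common reading of the requirement that a placement goal be set when utilization falls below availability; it is not OFCCP's prescribed algorithm.

```python
# Illustrative sketch of an AAP's diagnostic comparison: for each job
# group, compare incumbency (share of women or minorities currently
# employed) to availability (share in the relevant labor pool).
# All figures below are hypothetical.

JOB_GROUPS = {
    # job group: {population: (incumbency %, availability %)}
    "Officials and Managers": {"women": (19.0, 28.0), "minorities": (22.0, 25.0)},
    "Professionals":          {"women": (24.0, 26.0), "minorities": (30.0, 29.0)},
    "Technicians":            {"women": (21.0, 24.0), "minorities": (27.0, 31.0)},
}

def needs_placement_goal(incumbency: float, availability: float) -> bool:
    """Flag a job group when incumbency falls short of availability.
    The 'any shortfall' test here is one common approach; contractors
    may instead apply statistical tests or an 80 percent rule."""
    return incumbency < availability

for group, measures in JOB_GROUPS.items():
    for population, (inc, avail) in measures.items():
        if needs_placement_goal(inc, avail):
            # A placement goal is typically set at least equal to availability.
            print(f"{group}: set {population} placement goal of {avail:.1f}% "
                  f"(incumbency {inc:.1f}%)")
```

Note that evaluating "minorities" as a single pooled group, as in the sketch, can mask a shortfall for one group that is offset by a surplus for another; that is the concern behind the disaggregated goals discussed later in this report.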
OFCCP's enforcement program represents the majority of the agency's activity and is carried out primarily by compliance officers, who evaluate contractors' compliance with various requirements, according to agency officials. In addition to conducting compliance evaluations, OFCCP also conducts investigations in response to complaints. In 2016, we reported that, according to OFCCP officials, responding to complaints accounted for close to 16 percent of OFCCP's enforcement activities. OFCCP selects contractor establishments for evaluations based on a number of neutrally applied factors, such as employee count at the establishment, contract value, or contract expiration date. We previously found that OFCCP reviews, on average, 2 percent of federal contractor establishments annually. As we previously reported, as part of its compliance evaluations, OFCCP is to review the selected contractor's hiring, promotion, compensation, termination, and other employment practices to determine whether contractors are maintaining nondiscriminatory hiring and employment practices. OFCCP conducts evaluations at the establishment level. When a contractor establishment is selected for evaluation, OFCCP sends the contractor a “scheduling letter” requesting the AAP and supporting data, such as the percentage of women and minority staff at the workplace by job group. Then, a compliance officer is to conduct a desk audit, which is an off-site review of the submitted materials. If necessary, the compliance officer may also conduct an on-site review or further off-site analysis to make a final determination as to whether the contractor is in compliance. In addition to looking at whether federal contractors maintain nondiscriminatory hiring and employment practices, which can result in finding discrimination violations, OFCCP also frequently finds other types of violations, such as failure to keep necessary records or conduct annual reviews of equal employment and affirmative action efforts. These findings by the agency often require administrative changes on the part of the contractor, such as improved record-keeping. There are many different forms of remedies for discrimination violations, including financial, employment, and organizational change remedies. Although rare, under some circumstances, OFCCP may bar a contractor from doing business with the government.

Technology Workforce Grew from 2005 to 2015, but Women and Some Minority Groups Continued to be Less Represented

Compared to the General Workforce, the Technology Workforce Grew at a Higher Rate and Continued to be More Educated and Better Paid

From 2005 to 2015, the estimated number of workers in the technology workforce—people who worked in mathematics, computing, or engineering occupations—increased at a higher rate (24 percent) than the estimated number of workers in the general workforce (9 percent), according to ACS data. In 2015, the technology workforce comprised an estimated 7.5 million workers, an increase of slightly over 1.4 million workers since 2005. (For a complete list of the occupations we include as technology occupations, see appendix II.) Most technology workers have a college degree and have a higher median income than workers in the general workforce. Specifically, in 2015, an estimated 69 percent of technology workers held at least a bachelor's degree, compared to 31 percent of workers in the general workforce. In 2015, the estimated median income for technology workers was $81,000, compared to $42,000 for the general workforce.
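Estimates like the growth rates and median incomes above come from weighted tabulations of ACS microdata. The following is a minimal sketch of that kind of tabulation, not GAO's actual code: the file name, the set of occupation codes, and the column handling are assumptions, while PWGTP (person weight), OCCP (occupation code), and PINCP (total person income) are actual ACS Public Use Microdata Sample variables.

```python
import numpy as np
import pandas as pd

def weighted_median(values: np.ndarray, weights: np.ndarray) -> float:
    """Median of values where each record represents `weight` people."""
    order = np.argsort(values)
    values, weights = values[order], weights[order]
    cutoff = weights.sum() / 2.0
    return float(values[np.searchsorted(np.cumsum(weights), cutoff)])

# Hypothetical extract of 2015 ACS PUMS person records.
pums = pd.read_csv("acs_pums_2015_person.csv",
                   usecols=["OCCP", "PINCP", "PWGTP"]).dropna()

# Placeholder subset of occupation codes treated as "technology" here;
# the report's actual occupation list is in its appendix II.
TECH_OCC_CODES = {1005, 1006, 1010, 1020, 1105}
tech = pums[pums["OCCP"].isin(TECH_OCC_CODES)]

# Summing person weights yields the estimated number of workers.
print("Estimated tech workforce:", tech["PWGTP"].sum())
print("Estimated median tech income:",
      weighted_median(tech["PINCP"].to_numpy(), tech["PWGTP"].to_numpy()))
print("Estimated median overall income:",
      weighted_median(pums["PINCP"].to_numpy(), pums["PWGTP"].to_numpy()))
```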
Women and Certain Minority Groups Continued to be Less Represented in the Technology Workforce and Sector

Comparison of Technology Workforce to General Workforce, by Gender and Race

From 2005 to 2015, the percentage of women in the technology workforce remained flat, and women continued to be a smaller proportion of the technology workforce than of the general workforce. In 2015, women represented 22 percent (about 1.6 million workers) of workers in technology occupations, compared to 48.7 percent of workers in the general workforce (see fig. 2). Although the estimated percentage of minority technology workers as a whole had grown since 2005, we found that this trend did not apply to Black technology workers. Specifically, from 2005 through 2015, although the number of Black workers increased as the technology workforce grew, there was no statistically significant change in their representation as a percentage of the entire technology workforce. In contrast, from 2005 to 2015, Hispanic and Asian technology workers had statistically significant increases in their representation in the technology workforce. Even with the increase in their numbers in the technology workforce, Black and Hispanic technology workers remained a smaller proportion of these workers compared to their representation in the general workforce. In contrast, Asian workers were an increasing share of the technology workforce, where they remained more represented than they were in the general workforce (see fig. 3). We found that when we examined gender representation for each minority group, both Black and Hispanic men and women were less represented in the technology workforce compared to their representation in the general workforce. The same was true for White women, whereas White men, Asian men, and Asian women were more represented in the technology workforce compared to their representation in the general workforce (see fig. 4). We defined the technology sector as those companies that have the highest concentration of technology workers and are in such industries as computer systems design and software publishing. Companies categorized as outside the technology sector, for example, retail or finance companies, may still employ some technology workers. However, we found differences in median incomes for technology workers within and outside the technology sector. In 2015, technology workers employed in the technology sector earned an estimated median income of $89,000, compared to a median income of $78,000 for those working outside the technology sector. We also compared the characteristics of technology workers within the technology sector and outside the technology sector, and found that male and Asian technology workers were relatively more represented in the technology sector than outside it. Similar to the lower representation of female, Black, and Hispanic technology workers in technology occupations, we found technology workers from these groups were also more likely to work outside the technology sector than in it. For example, according to our analysis of 2015 ACS data, women represented an estimated 18 percent of all technology workers employed in the technology sector, compared to 25 percent of all technology workers employed outside the technology sector (see fig. 5). White technology workers were also more represented outside the technology sector than within the technology sector.
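Statements above about whether a change in representation was "statistically significant" depend on standard errors for the survey estimates, which for ACS PUMS are typically computed from the 80 replicate weights using successive differences replication. The sketch below is a simplified illustration, not GAO's methodology; the shares and standard errors shown are hypothetical, and computing the replicate-level shares themselves is left to a hypothetical upstream step.

```python
import numpy as np

def replicate_se(point: float, replicate_points: np.ndarray) -> float:
    """Standard error from ACS replicate estimates:
    SE = sqrt((4/80) * sum over replicates of (theta_r - theta)^2),
    the formula documented for ACS PUMS replicate weights."""
    return float(np.sqrt((4.0 / 80.0) * np.sum((replicate_points - point) ** 2)))

def significant_change(share_a: float, se_a: float,
                       share_b: float, se_b: float, z: float = 1.96) -> bool:
    # Treat the two years as independent samples and compare the
    # difference to its combined standard error at the 95 percent level.
    return abs(share_a - share_b) / np.hypot(se_a, se_b) > z

# Hypothetical values: a group's share of the tech workforce, 2005 vs. 2015.
print(significant_change(0.071, 0.004, 0.073, 0.004))  # False: not significant
```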
Companies in the technology sector also employ non-technical workers such as sales people, and the lower representation of women and certain minorities in the technology sector was also present in such non-technical job categories. According to our analysis of EEO-1 data, women were less represented across the full range of management and non-management positions at companies within the technology sector, including at leading technology companies, compared to their representation in companies outside the technology sector. We determined this by comparing specific occupations at companies both within and outside the technology sector using 2015 EEO-1 data. For example, women held about 19 percent of senior-level management positions at companies in the technology sector compared to nearly 31 percent of such positions at companies outside the technology sector in 2015. Women were also less represented in all of the remaining job categories (mid-level managers, professionals, technicians, and all other jobs) in the technology sector. (See fig. 6.) Comparing EEO-1 data at three points in time for 2007, 2011, and 2015, we found women's representation in management positions as well as among professionals and technicians at companies within the technology sector remained at about the same level, and decreased for “all other jobs” (see table 1). Similar to women, Black and Hispanic workers were less represented across multiple job categories in companies within the technology sector compared to those outside the technology sector (see fig. 7). For example, 1.8 percent of senior-level managers in the technology sector were Black, compared to 3.4 percent of senior-level managers in all other sectors. Appendix IV provides percentages for each minority group in different job categories within and outside the technology sector. The lower representation of Black workers in the technology sector relative to their representation in other sectors was consistent across all job categories (mid-level managers, professionals, technicians, and “all other jobs”). Hispanic workers were less represented in the technology sector compared to outside the technology sector across all job categories (senior and mid-level managers, professionals, technicians, and “all other jobs”). Compared to their representation across job categories within the technology sector in general, Black and Hispanic workers had slightly greater representation at the leading technology companies in senior management and technician categories, and lower representation among mid-level managers, professionals, and holders of “all other jobs.” Asian workers comprised a greater proportion of managerial and professional roles in the technology sector than in other sectors, according to our analysis of 2015 EEO-1 data. Asian workers represented 11.0 percent of senior-level managers in the technology sector compared to 4.3 percent in industries outside the technology sector. This higher representation of Asian workers in the technology sector was consistent among mid-level managers, professionals, and technicians. Asian workers were more represented in the same categories at the leading technology companies. However, a lower proportion of Asian workers held senior management positions compared to their representation in professional positions in both the technology sector and leading technology companies.
Further, the proportion of Asian workers in mid-level management positions was also lower than their representation in professional positions, from which mid-level managers might be selected, in both the technology sector and leading technology companies. In contrast, a higher proportion of White workers were in senior and mid-level management positions compared to their representation in professional positions in both the technology sector and leading technology companies. Comparing EEO-1 data at three points in time—2007, 2011, and 2015—we saw varied representation across job categories in the technology sector by race/ethnicity. For example, Black workers decreased in their representation in all job categories in the technology sector from 2007 to 2015. In contrast, Hispanic and Asian workers increased in their representation in all job categories we examined from 2007 to 2015 (see table 2).

Several Factors May Contribute to the Lower Representation of Women and Certain Minority Groups in the Technology Workforce

Several factors may contribute to the lower representation of female, Hispanic, and Black workers in the technology workforce and at companies in the technology sector, based on research and interviews with researchers and representatives from workforce and industry organizations and technology companies. These include the lower diversity of degree earners in technology-related fields, and company-based factors such as hiring practices and retention of women and underrepresented minorities. The smaller proportion of women in the technology workforce may reflect the number of women earning technology-related degrees. Slightly over two-thirds of technology workers report having earned their bachelor's degree in a computer, engineering, mathematics, or technology field. However, according to our analysis of 2014 IPEDS data, the percentage of technology-related bachelor's and master's degrees earned by women is far less than for men, although women were comparable to men in their receipt of science, technology, engineering, and math (STEM) degrees, and surpassed men in obtaining degrees in all other fields. In 2014, about 60,000 women were awarded technology-related bachelor's or master's degrees (compared to about 50,000 in 2004) and about 190,000 men were awarded such degrees (compared to about 147,000 in 2004). (See fig. 8.) An estimated 218,000 technology workers were added to the technology workforce in 2015, according to our analysis of 2015 American Community Survey data from the U.S. Census Bureau. In addition, technology degrees are also issued at the associate's level. Two researchers told us that women often have the academic preparation to enter into technology-related degree programs, but they may choose not to pursue such degrees because of instances of gender bias within technology classes. Our prior work reported on studies that found women leave STEM fields at a higher rate than their male peers, citing one study that found women leave STEM academic positions at a higher rate than men in part due to dissatisfaction with departmental culture, faculty leadership, and research support. Further, a 2012 consulting firm report found that businesses viewed as male-dominated tended to attract fewer women at the entry level. In addition, according to our analysis of 2014 IPEDS data, three minority racial or ethnic groups each constituted 10 percent or fewer of bachelor's and master's degree earners in a technology-related field.
Specifically, among the 202,200 earners of degrees in a technology-related field in 2014, there were about 20,000 Hispanic recipients, 13,000 Black recipients, and 18,000 recipients who were Multiracial or other race, which includes American Indian or Alaska Native, Other or Unknown Race, and Two or more Races, i.e., respondents who selected one or more racial designations. Among all minority groups, Asian students, including Pacific Islander, earned the highest proportion of technology-related degrees (about 24,000 individuals). (See fig. 9.) One barrier to entry into technology degree paths for Black and Hispanic students may be a lower likelihood of access to preparatory academic programs in secondary school. In 2016, we reported that the K-12 public schools in the United States with students who are mostly Black or Hispanic offered disproportionately fewer math and science classes for their students. One researcher told us some colleges and universities, to help these students be academically successful, provide additional academic support such as tutoring to help bridge knowledge gaps. To address the uneven access to preparatory math and science classes, representatives from five technology companies told us they have started to invest in exposing Black and Hispanic children to technology occupations by, for example, developing online resources targeted to them and their parents and creating partnerships with secondary schools to improve their academic preparation in computer science. However, we have previously also reported that the number of students graduating with STEM degrees may not be a good measure of the supply of STEM workers because students often pursue careers in fields different from the ones they studied. For example, a lower percentage of women who obtained technology-related degrees became technology workers compared to men who earned the same degrees, according to our analysis of 2015 ACS data. Specifically, among women who earned technology degrees, an estimated 33 percent worked as technology workers, compared to 45 percent of men who earned technology degrees. Several representatives we interviewed from workforce and industry organizations and technology companies told us that recruitment practices may also have affected diversity in the technology workforce. For example, representatives from three workforce and industry organizations said technology companies tend to recruit from a select number of universities and colleges, thereby limiting their pool of potential applicants. To address this, representatives from several of the technology companies we interviewed told us they had changed recruitment practices and offered internships targeted to underrepresented groups. For example, representatives of four technology companies told us that their companies had expanded recruitment to include more schools. Representatives from two companies told us they offer programs such as summer and semester internships for which the company actively recruits from Historically Black Colleges and Universities and other specific schools to increase its pool of diverse candidates. In addition, representatives from workforce organizations and technology companies discussed concerns and strategies to address companies' hiring practices and internal cultures that may limit workforce diversity.
For example, one of these representatives said that technology companies often offer financial incentives to current employees to make referrals for new hires, which can result in reliance on social networks. These networks may be largely composed of people of the same race, and this practice therefore makes it harder for potential candidates from demographically different groups to have their resumes reviewed. Another workforce organization representative reported that some hiring managers filter out eligible candidates if their background and qualifications are not the same as those of previously successful employees. To address these concerns, representatives from one technology company told us that they had moved away from depending on referrals, since this practice may result in leaders hiring people within their own networks, which generally does not increase diversity of gender or race/ethnicity. In addition, representatives from another company said they plan to begin reviewing resumes with names removed to limit bias by the reviewer. Further, representatives we interviewed from three technology companies told us they offer training to help employees identify their own unconscious biases. Other factors may affect retention of women and underrepresented minorities. For example, a representative from a workforce organization said that women leave technology occupations at a higher rate than men because they feel as if they have not been given the same opportunities for promotion and advancement within the company. A 2016 study that examined women in engineering and science found that women's concerns about pay and promotion are often an issue in male-dominated fields regardless of the industry. Further, this study found that retention difficulties became more severe as the share of men in the workforce increased, and that this affected women's pay and promotion. Representatives from one company told us another challenge is the lack of Black workers at the top levels, which might make it more difficult for Black employees in particular to see a leadership path. Representatives we interviewed from five technology companies told us they had implemented efforts to increase retention and promotion rates among minority and female workers, for example, by developing a diversity and inclusion newsletter, employee resource groups with executive sponsors, and internal training and classes for employees to improve their readiness to be promoted. Representatives from five technology companies told us that commitment of top leadership is an important factor that can help women and underrepresented minorities in the technology sector. For example, representatives from one company told us that top management support for diversity efforts, such as setting hiring goals, can help move a company in the direction of achieving representation goals and that leadership is very important to this effort. Representatives from several companies told us that there is often a business case for such changes: these companies work in a diverse, global environment and strive to make better products for diverse users. However, our prior work on workforce diversity in the financial services sector found that some diversity initiatives faced challenges gaining the "buy-in" of key employees, such as the middle managers who are often responsible for implementing such programs.
EEOC and OFCCP Have Taken Steps to Oversee Equal Employment Opportunity and Affirmative Action Requirements, but Face Limitations

EEOC and OFCCP Have Taken Steps to Oversee Compliance in the Technology Sector

According to EEOC officials, EEOC primarily oversees compliance with equal employment opportunity requirements by investigating workers' individual charges of employment discrimination filed against companies. EEOC has publicly acknowledged the low levels of diversity in the technology sector. However, we were unable to identify a specific number of charges received by EEOC against companies in industries that are part of this sector because EEOC does not require investigators to record the industry of the charged company. EEOC's database of charges and enforcement actions—the Integrated Mission System (IMS)—has a data field for the North American Industry Classification System (NAICS) industry code, the standard used by federal statistical agencies in classifying business establishments. However, we found that it is completed for only about half the entries in the system. EEOC officials in both the San Francisco and New York district offices told us that, while they cannot readily identify individual charges against technology companies, they believe they have received far fewer charges against technology companies than they would have expected given the public attention to the issue of diversity in the technology sector. In terms of systemic cases, according to EEOC, as of June 2017, the commission had 255 systemic cases pending since fiscal year 2011 involving technology companies (13 of these were initiated as commissioner charges and 8 were directed investigations involving age discrimination or pay parity issues). Officials from the New York region reported that they had seen an increase in systemic cases against technology companies in the past 3 years, largely involving practices of information technology staffing firms. Several EEOC officials we interviewed noted that technology workers may be initiating few complaints at the federal level due to factors such as fear of retaliation from employers or the availability of other employment or legal options. According to EEOC officials, fear of retaliation can affect charges across sectors and, given the growth in the technology workforce, an individual who feels discriminated against may simply leave the company because there are many other opportunities for individuals with technical skills. They also said that technology workers may generally have greater wealth and can afford to hire private attorneys to sue in state court rather than go through the EEOC. Moreover, they said that some states, including California, have stronger employment discrimination laws that allow for better remedies than federal laws, which could lead employees to file charges at the state level rather than with the EEOC. In addition, EEOC has acknowledged in a 2016 report that binding arbitration policies, which require individuals to submit their claims to private arbiters rather than courts, can also deter workers from bringing discrimination claims to the agency, leaving significant violations in entire segments of the workforce unreported. The report stated that an increasing number of arbitration policies have added bans on class actions that prevent individuals from joining together to challenge practices in any forum.
The report concluded that the use of arbitration policies hinders EEOC's ability to detect and remedy potential systemic violations. Researchers report that the use of such clauses has grown, and data on federal civil filings for civil rights employment cases reflect a marked reduction in the number of such filings. Beyond pursuing charges, EEOC has taken some steps to address diversity in the technology sector, including research and outreach efforts. In May 2016, citing the technology sector as a source for an increasing number of U.S. jobs, EEOC released a report analyzing EEO-1 data on diversity in the technology sector in tandem with a commission meeting raising awareness on the topic. In addition, EEOC's fiscal year 2017-2021 Strategic Enforcement Plan identified barriers to hiring and recruiting in the technology sector as a strategic priority. EEOC has also been involved in outreach efforts with the technology sector. For example, the EEOC Pacific Region described more than 15 in-person or webinar events since 2014 in collaboration with OFCCP and local organizations focused on diversity in the technology sector. The topics of these events included equity in pay and the activities of these two agencies in enforcing nondiscrimination laws. Finally, in fall 2016, EEOC initiated an internal working group to identify practices to help improve gender and racial diversity in technology, but as of June 2017 it had no progress to report. OFCCP's regulations require covered federal contractors to take proactive steps to ensure equal employment opportunity. OFCCP annually conducts routine evaluations of selected federal contractors, including those in the technology sector, for compliance with federal nondiscrimination and affirmative action requirements. To the extent that technology contractors are selected for evaluation through OFCCP's normal selection process, these contractors are assessed for compliance with nondiscrimination and affirmative action laws as are other selected contractors. While evaluation of technology contractors occurs in the course of OFCCP's routine activities, OFCCP does not currently use type of industry as a selection factor, according to officials. We also found that few (less than 1 percent) of OFCCP's 2,911 closed technology contractor evaluations from fiscal years 2011 through 2016 resulted in discrimination violations, though 13 percent resulted in other violations, such as record-keeping violations and failure to establish an affirmative action program (AAP). An AAP is a key tool OFCCP requires contractors to complete to ensure equal employment opportunity. The remaining 86 percent of evaluations either found no violations or ended in administrative closure. Technology contractor evaluations that had discrimination violations resulted in back pay, salary adjustments, or other benefits totaling more than $4.5 million for 15,316 individuals (averaging about $300 per award) for fiscal years 2011 through 2016. The vast majority of discrimination violations were on the basis of gender or race/ethnicity rather than disability or veteran status. Corrective actions OFCCP identified for federal technology contractors over this timeframe also included requiring contractors to fill a total of 410 job vacancies as they arise with applicants who had been denied employment on the basis of discrimination. In addition, OFCCP recently filed three complaints against technology companies.
According to our analysis, OFCCP conducted evaluations on 36 of the 65 leading technology companies from fiscal year 2011 through fiscal year 2016. During this timeframe there were 272 reviews of establishments—physical business locations—affiliated with these 36 companies. Based on these evaluations, 15 of the 36 companies had administrative violations, and 2 of the 36 also had discrimination violations. As a result of the discrimination findings against these leading technology companies, 541 individuals received monetary benefits totaling $783,387 (an average of $1,448 per award). In terms of other steps to conduct oversight of the technology sector, OFCCP officials in the Pacific Region said they are hiring compliance officers with legal training to be better able to address needs for reviews in the technology sector, such as responding to lawyers representing technology contractors. Officials in both the Pacific and Northeast regions work closely with statisticians and labor economists on their cases, an effort officials said has increased over the past few years. OFCCP has also requested funding in its fiscal year 2018 congressional budget justification to establish centers in San Francisco and New York that would develop expertise to handle large, complex compliance evaluations in specific industries, including information technology.

EEOC Cannot Analyze Charge Data by Industry to Identify Priorities and OFCCP Faces Challenges to Oversight of Technology Companies

We found that by not requiring an industry code in its investigations data, EEOC cannot analyze charge data by industry to help identify investigation and outreach priorities, contrary to EEOC strategic planning documents and EEOC Inspector General reports, which have emphasized the importance of doing so. By not requiring the use of the NAICS code for each entry in IMS, EEOC is limited in its ability to use these data for the purposes of identifying charges by industry sector and conducting sector-related analyses. Officials were aware of substantial gaps in coding of charges by industry and acknowledged limitations in the commission's ability to analyze its investigations data by industry. However, officials expressed concern that routinely creating more complete records of the companies against which charges had been filed would require investigators to divert attention from their efforts to investigate charges. EEOC officials explained that the charging party provides initial information on the respondent company and that requiring EEOC personnel to generate this information would slow down the process. They said their priority is to investigate individual charges, not to address larger trends or target specific industries.

“The Strategic Enforcement Plan recommends using EEOC data to allow our enforcement and outreach efforts to focus on areas of significant concern. This might include tailoring outreach efforts for industries that experience greater likelihood of certain charges or informing enforcement decisions based on knowledge that certain industries have persistent problems, such as harassment.
The data maintained in IMS provide a rich resource of information that can be used to explore the characteristics of industries that appear to have higher levels of certain allegations than comparative industries.”

In addition, reports completed by the Urban Institute for the EEOC Office of Inspector General in 2013 and 2015 similarly recommended analysis of charge data, including by industry, to help identify priorities and measure performance. While EEOC has plans to review a year of IMS data to clean it and determine how best to add missing industry codes, among other objectives, officials could not provide a specific timeframe for when this review would begin and end. Standards for internal control in the federal government state that management should use quality information to achieve the agency's objectives and that objectives should be defined in specific terms so they are understood at all levels of the entity. This involves clearly defining what is to be achieved, who is to achieve it, how it will be achieved, and the time frames for achievement. Efforts to scrub these data and identify missing codes could help EEOC determine how to collect industry information on an ongoing basis for all entries. Doing so would also help EEOC determine the level of NAICS code that would be feasible and useful for investigators to identify and input into IMS. Without analyzing its data on charges across industries, EEOC's ability to proactively identify priorities for its outreach and enforcement resource use is limited. We found that OFCCP also faces challenges that may hinder the agency's oversight of technology companies. Specifically, OFCCP reported facing delays in receiving information from federal contractors, including technology companies, but has not yet evaluated whether its own policies and practices also impede its efforts to hold federal technology contractors responsible for the legal requirements to take affirmative action and not discriminate against protected groups. In addition, OFCCP regulations do not require federal contractors to disaggregate data for the purpose of determining placement goals for hiring, which may hinder contractors' efforts to implement effective affirmative action programs.

OFCCP has not analyzed delays in obtaining information from contractors

OFCCP officials told us that they face delays in obtaining complete, accurate, and timely documentation from federal contractors, including technology companies, as part of the compliance review process. They said this limited their access to critical information and hindered OFCCP's ability to determine whether discrimination had occurred. Officials in the Pacific Region reported that when issues are identified during OFCCP's initial review that will require additional data, the data requests can be extensive. Consequently, technology contractors are taking longer to submit the complete and accurate data that are needed to conduct analyses of the contractor's workforce. In addition, officials in both the Pacific and Northeast regions reported that companies may not provide raw data as requested, or provide access to employees for OFCCP to interview, which is part of the compliance review process. Using 2015 OFCCP compliance evaluation data, we previously reported that close to 85 percent of contractor establishments across all sectors did not submit an AAP within 30 days of being scheduled for an OFCCP compliance evaluation, as required by OFCCP policy.
Officials told us of the potential need for a more flexible set of investigatory tools or sanctions, such as subpoena power to speed up data-gathering or penalties for delays in providing information, in order to obtain accurate and timely information. In the case of incomplete data, OFCCP officials said one option is to enter into an agreement with the contractor whereby the contractor will gather the missing data, and OFCCP will monitor the contractor's efforts and review detailed records at a later date. However, they said that such an agreement could give the contractor an opportunity to modify the data in the contractor's favor. Currently, OFCCP's primary sanction is the threat of debarment, which makes a company ineligible to receive future federal contracts. At the same time, OFCCP officials acknowledged there may additionally be delays in their own review processes. In prior work, we have reported concerns by contractors and industry groups about lengthy and expansive OFCCP evaluations. However, OFCCP has not analyzed its data on closed evaluations to assess the cause of delays, which would help determine whether changes should be made to its internal processes or whether stronger sanctions to obtain information from contractors are needed. Internal control standards state that management should identify, analyze, and respond to risks related to achieving its objectives. Further, they state that management should design appropriate mechanisms to enforce its directives to achieve those objectives and address related risks. Without more information on the root cause of the delays, these delays may continue, straining resources and inhibiting OFCCP's efforts to identify potential discrimination.

“An affirmative action program is a management tool designed to ensure equal employment opportunity. A central premise underlying affirmative action is that, absent discrimination, over time a contractor's workforce, generally, will reflect the gender, racial and ethnic profile of the labor pools from which the contractor recruits and selects. Affirmative action programs contain a diagnostic component which includes a number of quantitative analyses designed to evaluate the composition of the workforce of the contractor and compare it to the composition of the relevant labor pools. Affirmative action programs also include action-oriented programs. If women and minorities are not being employed at a rate to be expected given their availability in the relevant labor pool, the contractor's affirmative action program includes specific practical steps designed to address this underutilization.”

“The placement goal-setting process . . . contemplates that contractors will, where required, establish a single goal for all minorities. In the event of a substantial disparity in the utilization of a particular minority group or in the utilization of women or women of a particular minority group, a contractor may be required to establish separate goals for those groups.”

According to OFCCP officials, a contractor may be required to establish separate goals for particular minority groups as part of a compliance review. We found, however, that OFCCP's regulations do not require federal contractors to disaggregate demographic data for the purpose of establishing placement goals in their AAP.
This may hinder their efforts to implement effective AAPs, which are designed to assist the company in achieving a workforce that reflects the gender, racial, and ethnic profile of the labor pools from which the contractor recruits and selects. OFCCP officials in headquarters and in the field said, based on their experience evaluating companies' compliance, that it was not common for companies to have placement goals disaggregated by race and ethnicity in their AAPs. A diversity and inclusion officer we interviewed from one large technology contractor noted that the requirement in the AAP to identify the need for placement goals for minorities as a whole does not address underrepresentation in certain minority groups. According to the officer, the company does not count Asian workers in setting the company's diversity goals because Asians are well represented, and the company believes it should set a placement goal for groups for which the company knows it needs to make progress. Citing comments received during development of other regulations, OFCCP officials cautioned that an analysis of utilization disaggregated by race/ethnicity may be more challenging for smaller companies with fewer employees. Further, looking at trends in diversity for minorities as a whole may not assist a company's affirmative action efforts to identify groups that need particular outreach or support. Specifically, our analysis of workforce data found differences in representation for Black and Hispanic workers in the technology workforce compared to Asian workers. Under the current AAP regulations, companies may opt not to detect and address underrepresentation of particular minority groups, since OFCCP does not require placement goals disaggregated by race/ethnicity. While OFCCP may be able to detect underrepresentation of particular minority groups during its reviews, the office reviews only 2 percent of federal contractor establishments each year. OFCCP officials said that they would need to amend their regulations in order to require disaggregated race/ethnicity information for placement goals on AAPs. The officials said disaggregating race in placement goals could help an establishment determine how to tailor outreach accordingly or better identify impediments to its equal employment opportunity efforts. However, they have not pursued this regulatory change because of competing priorities on their regulatory agenda. OFCCP's mission includes holding federal contractors responsible for the legal requirements to take affirmative action and not discriminate against protected groups. However, not requiring contractors to set placement goals for each minority group may hinder OFCCP's ability to effectively achieve this mission.

OFCCP has not reviewed key aspects of its current approach to evaluations

OFCCP officials report the agency intends to incorporate additional information on gender, racial, and ethnic disparities by industry into its compliance evaluation selection process, but we found the methodology to determine the disparities may have weaknesses. We have previously reported on the challenges OFCCP faces with its enforcement efforts, and we identified additional areas that may limit OFCCP's enforcement of federal contractors' equal employment and affirmative action efforts. For example, our 2016 report found that OFCCP's weak compliance evaluation selection process, reliance on voluntary compliance, and lack of staff training create several challenges to its enforcement efforts.
This report found that because OFCCP was not able to identify which factors are associated with risk of noncompliance, the agency does not have reasonable assurance that it is focusing its efforts on those contractors at greatest risk of not following nondiscrimination or affirmative action requirements. OFCCP agreed with recommendations we made to address these areas and detailed steps the agency would take. In particular, to strengthen its compliance evaluation process to select contractors at greatest risk of potential discrimination, the agency stated that it planned to incorporate information on pay disparities and employment disparities. OFCCP officials indicated this information would be based on analysis of gender and race/ethnicity by industry using ACS data and EEO-1 compensation data that was to be collected beginning March 2018. However, in August 2017, the Office of Management and Budget issued a memo suspending the pay-related data collection aspects of the EEO-1 form. Despite this change, OFCCP officials said they are exploring other options for focusing on compensation disparities by industry, including through the use of ACS data, administrative data, a previous study conducted by the Department of Labor, as well as options proposed by contractors. We also found that OFCCP's current methodology for identifying disparities by industry with the ACS data may have some weaknesses that could affect the accuracy of the outcomes. For example, its reliance on the broadest industry level available may not sufficiently identify specific industries at elevated risk. Further, the methodology defers analysis at the metropolitan-area level to future plans. Given the importance of regional and local labor markets for assessing affirmative action efforts, regional and local analysis should also be completed before OFCCP incorporates this analysis into its selection process. It is important that OFCCP use reliable information in modifying its basic processes and setting priorities. For the reasons cited earlier regarding the importance of using quality information to make management decisions, it is important that OFCCP assess the quality of the methods for its analysis of employment disparities among industries. Without doing so, OFCCP may not accurately identify industries at greatest risk of potential noncompliance with nondiscrimination and affirmative action requirements so that it can focus its limited investigation resources most effectively. Further, according to OFCCP officials, although the agency has made slight changes to various thresholds and factors for its selection process, the agency has not made any significant changes to the selection process for about 10 years, and it has made no changes to its establishment-based approach since OFCCP was founded in 1965. While OFCCP currently grounds its review of a contractor in a particular physical establishment, OFCCP officials acknowledged the changing nature of a company's work can involve multiple locations and corresponding changes in the scope of hiring and recruitment. Officials we interviewed from five of our eight selected technology companies discussed their work spread across locations in the United States and overseas, and the related challenges they face with OFCCP's establishment-based approach to reviews. One company representative said the AAP is not useful because site-specific plans do not connect to business decisions.
However, OFCCP has not reviewed how continuing its establishment-based approach to compliance evaluations affects its ability to achieve its mission effectively. In addition, OFCCP officials acknowledged that, in identifying establishments for review, they cannot consistently identify and include all subcontractors to which OFCCP rules should apply. They said the agency has not assessed the potential significance of any omissions of subcontractors from the oversight process. Internal control standards state that management should identify risks throughout the entity related to achieving its defined objectives to form a basis for designing risk responses, and should periodically review policies, procedures, and related control activities for continued relevance and effectiveness in achieving the agency's objectives. OFCCP officials said they have informally discussed how to adjust their work based on how work is performed in today's economy—with virtual sites, workplace flexibilities, and nontraditional forms of employment. However, due to competing priorities, they have not conducted a formal review of these key aspects of the agency's current approach to selecting entities for review. They acknowledged such a review would be useful. Without assessing its current approach to establishment-based reviews and its identification of all relevant subcontractors, OFCCP does not have reasonable assurance that its approach can identify discrimination occurring within the companies it oversees, and it may be missing opportunities to identify more effective practices or adjust its methods to external changes. While OFCCP has offered an option, the Functional Affirmative Action Program (FAAP), that allows companies to move away from establishment-based reviews and may be more appropriate for some multi-establishment contractors, uptake has been low and the agency has not evaluated the program. Since 2002, OFCCP has allowed companies to create FAAPs, with OFCCP approval, which are based on a business function or unit that may exist at multiple establishments. As of May 2017, 73 companies across all industries had FAAPs in place. Further, some of the companies we interviewed were unaware that the FAAP was an option or believed it was cumbersome to establish given the complexity of their workforce. Asked why the FAAP has not been more broadly adopted, OFCCP officials hypothesized that a requirement intended to ensure that companies with FAAPs are reviewed at least as often as others may in practice result in these companies being reviewed more often than most. Standards for internal control for government agencies state that management should periodically review policies, procedures, and related control activities for continued relevance and effectiveness in achieving the entity's objectives. Reviewing and refining the FAAP program could help OFCCP improve its ability to achieve its objectives and may provide broader insight for OFCCP's overall enforcement approach.

Conclusions

Jobs in the high-paying technology sector are projected to grow in coming years. Female, Black, and Hispanic workers, however, comprised a smaller proportion of technology workers compared to their representation in the general workforce from 2005 through 2015, and have also been less represented among technology workers inside the technology sector than outside it.
Both EEOC’s and OFCCP’s mission is to combat discrimination and support equal employment opportunity for U.S. workers; however, weaknesses in their processes impact the effectiveness of their efforts. When conducting investigations, EEOC has not been consistently capturing information on industry codes. This impedes its ability to conduct industry sector analysis that could be used to more effectively focus its limited enforcement resources and outreach activities. Similarly, OFCCP faces delays in its compliance review process but it has not analyzed its closed evaluations to understand the causes of these delays and whether its processes need to be modified to reduce them. In addition, as part of their affirmative action programs federal contractors are only required to set placement goals for all minorities in general. By not requiring contractors to disaggregate demographic data for the purpose of establishing placement goals, OFCCP has limited assurance that these contractors are setting goals that will address potential underrepresentation in certain minority groups. Further, OFCCP plans to incorporate information on disparities by industry into its process for selecting establishments for compliance evaluations, but has not fully assessed its planned methods. Without such assessment, OFCCP may use a process that does not effectively identify the industries at greatest risk of potential noncompliance with nondiscrimination and affirmative action requirements. In addition, key aspects of OFCCP’s approach to compliance reviews of contractors’ affirmative action efforts have not changed in over 50 years, whereas the structure and locations of these companies’ work have changed. Finally, although OFCCP has developed an alternative affirmative action program for multi-establishment contractors, few contractors participate in this program. Because OFCCP has not evaluated the program, it does not have information to determine why there has not been greater uptake and whether it provides a more effective alternative to an establishment-based AAP. Recommendations for Executive Action We are making a total of six recommendations, including one to EEOC and five to OFCCP. Specifically: The Chair of the EEOC should develop a timeline to complete the planned effort to clean IMS data for a one-year period and add missing industry code data. (Recommendation 1) The Director of OFCCP should analyze internal process data from closed evaluations to better understand the cause of delays that occur during compliance evaluations and make changes accordingly. (Recommendation 2) The Director of OFCCP should take steps toward requiring contractors to disaggregate demographic data for the purpose of setting placement goals in the AAP rather than setting a single goal for all minorities, incorporating any appropriate accommodation for company size. For example, OFCCP could provide guidance to contractors to include more specific goals in their AAP or assess the feasibility of amending their regulations to require them to do so. (Recommendation 3) The Director of OFCCP should assess the quality of the methods used by OFCCP to incorporate consideration of disparities by industry into its process for selecting contractor establishments for compliance evaluation. It should use the results of this assessment in finalizing its procedures for identifying contractor establishments at greatest risk of noncompliance. 
(Recommendation 4)

The Director of OFCCP should evaluate the current approach used for identifying entities for compliance review and determine whether modifications are needed to reflect current workplace structures and locations or to ensure that subcontractors are included. (Recommendation 5)

The Director of OFCCP should evaluate the Functional Affirmative Action Program to assess its usefulness as an effective alternative to an establishment-based program, and determine what improvements, if any, could be made to better encourage contractor participation. (Recommendation 6)

Agency Comments and Our Evaluation

We provided a draft of this report to the Departments of Labor (DOL) and Commerce, the Equal Employment Opportunity Commission (EEOC), and the National Science Foundation (NSF). We received written comments from DOL that are reproduced in appendix V. In addition, DOL, Commerce, EEOC, and NSF provided technical comments, which we incorporated into the report as appropriate. DOL agreed with 4 of the 5 recommendations we made to improve oversight of federal contractors and identified some steps it plans to take to implement them. Specifically, the department agreed with our recommendations to analyze internal process data to better understand the cause of delays that occur during compliance evaluations; assess the quality of methods used to incorporate consideration of disparities by industry into the process to select contractors for review; and evaluate its current approach to identifying entities for review in light of changes in workplace structures, as well as its Functional Affirmative Action Program. DOL stated that it appreciated, but neither agreed nor disagreed with, our recommendation to take steps toward requiring contractors to disaggregate demographic data for the purpose of setting placement goals in the AAP rather than setting a single goal for all minorities. The department said this would require a regulatory change with little immediate benefit, as contractors are already required to collect demographic data on each employee and applicant and must conduct in-depth analyses of their total employment processes to identify where impediments to equal opportunity exist. While we acknowledge these data collection requirements for federal contractors, we remain concerned that without requiring contractors to also establish placement goals to address any underrepresentation for specific minority groups, contractors may not develop objectives or targets to make affirmative action efforts work. We maintain, therefore, that DOL should take steps toward requiring contractors to develop placement goals disaggregated by race/ethnicity. EEOC provided us a memo that it characterized as technical comments on the draft report. In these comments, EEOC neither agreed nor disagreed with our recommendation to develop a timeline to complete its planned effort to clean IMS data for a one-year period, which would include adding missing industry codes, but stated that it was taking some actions to enhance these data. We continue to maintain that a timeline should be developed to complete this review, which is needed for the commission to conduct industry sector analysis that could be used to more effectively focus its limited resources and outreach activities.
EEOC also emphasized the importance of systemic investigations, noting that while outreach may be somewhat useful in generating charges, individual charges are unlikely to make a substantial impact on a systemic practice affecting an entire employment sector. We maintain that the ability to analyze IMS data by industry could help EEOC to focus its resource use, including for systemic investigations. EEOC also noted staffing and resource constraints as issues faced by the commission. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of Labor, the Chair of the Equal Employment Opportunity Commission, the Secretary of Commerce, and the Director of the National Science Foundation. In addition, the report will be available at no charge on GAO's website at http://www.gao.gov. If you or your staff should have any questions about this report, please contact me at (202) 512-7215 or brownbarnesc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI.

Appendix I: Objectives, Scope, and Methodology

Our two objectives were to (1) identify the demographic trends in the technology workforce over the past 10 years, and (2) assess the efforts by the U.S. Equal Employment Opportunity Commission (EEOC) and the Department of Labor's Office of Federal Contract Compliance Programs (OFCCP) to oversee technology companies' and technology contractors' compliance with equal employment opportunity and affirmative action requirements. This appendix provides details of the data sources used to answer these questions, the analyses we conducted, and any limitations we encountered.

Definition of Technology Sector and Technology Occupations

There is no commonly accepted definition of the technology sector or technology-oriented occupations. To arrive at our definition for the technology sector, we identified industries with the highest concentration of technology-oriented occupations, an approach similar to the one other federal agencies have recently used to analyze trends within this sector. To identify technology-oriented occupations, we reviewed relevant research and interviewed researchers and other individuals knowledgeable about the technology sector. Based on this research, we defined technology-oriented occupations to include all computer, engineering, and mathematical occupations, including managers. We selected our occupations using Bureau of Labor Statistics (BLS) Standard Occupational Classification (SOC) System codes, and crosswalked those occupations to the corresponding U.S. Census Bureau occupation codes to conduct our analysis. (For a complete list of the occupations we included as technology occupations, see appendix II.) We defined the technology sector as a group of industries with the highest concentration of technology workers. Using data from the American Community Survey, an ongoing national survey conducted by the U.S. Census Bureau that collects information from a sample of households, we identified the 15 industries with the highest concentration of technology workers. For this analysis, we used Census industry codes since we used this dataset for many of our analyses.
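To make the concentration screen concrete, the following is a minimal sketch in Python of how such a screen could be implemented. The file names, the column names (occ_code, ind_code, pwgtp), and the occupation list are hypothetical stand-ins for the actual ACS microdata fields, not the exact programs we used.

```python
# Minimal sketch of the industry-concentration screen described above.
import pandas as pd

acs = pd.read_csv("acs_persons.csv")  # one row per sampled worker (hypothetical extract)
tech_occs = set(pd.read_csv("tech_occ_codes.csv")["occ_code"])  # appendix II list

acs["is_tech"] = acs["occ_code"].isin(tech_occs)

def weighted_tech_share(group: pd.DataFrame) -> float:
    """Person-weighted share of an industry's workforce in technology occupations."""
    return group.loc[group["is_tech"], "pwgtp"].sum() / group["pwgtp"].sum()

concentration = (acs.groupby("ind_code")
                    .apply(weighted_tech_share)
                    .sort_values(ascending=False))

# The 15 industries with the highest concentration define the technology sector.
tech_sector_inds = concentration.head(15).index.tolist()
print(concentration.head(15))
```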
The concentration of technology workers in these industries ranged from a high of 62.2 percent in the computer systems design and related services industry to a low of 19.33 percent in the wired telecommunications carriers industry (see table 3). Companies in the technology sector also employ non-technical workers, such as salespeople. We crosswalked the industries we identified in the American Community Survey with corresponding industry codes from the North American Industry Classification System (NAICS), which is the standard used by federal statistical agencies in classifying business establishments. The other data sets used in this review use NAICS codes to identify industry. The NAICS system has six levels of industry classification, with the smallest level (2-digit code) providing the most general industry classification and the largest (6-digit) providing the most specific classification. In total, we identified 55 6-digit NAICS industry codes that comprise the technology sector using this method. (See appendix III for a list of the 6-digit NAICS codes and industry names that correspond to the Census industries we identified.) We compared our list of industries to those included in the 2016 reports by EEOC and the BLS on the technology sector. While each report includes a somewhat different set of industries depending on the authors' particular definition of technology occupations, most of the 15 industries we selected overlap with industries selected in these other reviews. Stemming from their particular focus, these reports included some additional industries and/or occupations excluded from our analysis, such as those in the life sciences. We also compared our findings on the demographic trends in the technology workforce to 2016 EEOC and Census Bureau reports that reviewed diversity in the technology sector. Despite the definitional and methodological variations, the demographic trends found in these other reports were generally comparable to our findings.

American Community Survey (ACS) Data

To determine the demographic trends in the technology workforce over the past decade, we analyzed quantitative data on technology workers within and outside the technology sector from 2005 through 2015, using the Census Bureau's Public Use Microdata Sample of the American Community Survey (ACS) for the years 2005, 2007, 2009, 2011, 2013, and 2015. ACS is an ongoing national survey that collects information from a sample of households. We analyzed trend data for gender, race, and ethnicity, and median salary by occupation and sector, and analyzed point-in-time data on educational background by occupation. We analyzed the percentage of technology workers who earned bachelor's degrees in computer, engineering, mathematics, and technology fields. For median salary, we analyzed data for workers who were employed full-time, which included those who, over the past 12 months, reported usually working 35 hours or more per week and 50 weeks or more per year, and who had wages greater than zero. To account for the sample representation and design used in the ACS, we used the person weight present in the ACS data. We used the successive difference replication method to estimate the standard errors around any population estimate. For each comparison, we tested the statistical significance of the difference for men and women and for specific racial and ethnic groups at the p-value <0.05 level.
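The following is a minimal sketch of the successive difference replication calculation just described. It assumes an ACS PUMS-style extract with a point-estimate person weight (pwgtp) and 80 replicate weights (pwgtp1 through pwgtp80); the indicator column and data frame names are hypothetical, and the two-sample test treats the survey years as independent samples.

```python
# Minimal sketch of successive difference replication (SDR) standard errors
# for a weighted share, with a simple two-sample significance test.
import numpy as np
import pandas as pd

def weighted_share(df: pd.DataFrame, flag: str, weight: str) -> float:
    """Weighted share of rows where the indicator column is True."""
    return df.loc[df[flag], weight].sum() / df[weight].sum()

def sdr_share_and_se(df: pd.DataFrame, flag: str):
    point = weighted_share(df, flag, "pwgtp")
    reps = np.array([weighted_share(df, flag, f"pwgtp{r}")
                     for r in range(1, 81)])
    # SDR variance for ACS PUMS: (4/80) times the sum of squared deviations
    # of the 80 replicate estimates from the point estimate.
    se = np.sqrt((4.0 / 80.0) * np.sum((reps - point) ** 2))
    return point, se

# Hypothetical frames of technology workers in two survey years.
share_a, se_a = sdr_share_and_se(tech_2005, "is_female")
share_b, se_b = sdr_share_and_se(tech_2015, "is_female")
z = (share_b - share_a) / np.sqrt(se_a**2 + se_b**2)
print("statistically significant at p < 0.05:", abs(z) > 1.96)
```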
In addition, we tested the statistical significance of the change between 2005 and 2015 for each gender and racial/ethnic group. For race categories using ACS data in this report, we included only non-Hispanic members of White, Black, Asian, and Other categories. For the Asian category, we included Asian American, Native Hawaiian or Other Pacific Islander. The Hispanic category incorporated Hispanics of all races. Our analysis included American Indian or Alaskan Native and Two or More Races in the category reported as "Other." We assessed the reliability of the ACS generally and of data elements that were critical to our analyses and determined that they were sufficiently reliable for our analyses. Specifically, we reviewed documentation on the general design and methods of the ACS and on the specific elements of the ACS data that were used in our analysis. We interviewed Census Bureau officials knowledgeable about the ACS data and completed our own electronic data testing to assess the accuracy and completeness of the data used in our analyses.

Employer Information Report (EEO-1) Data

To determine workforce trends in companies within the technology sector and at leading information technology companies, we analyzed data from EEOC's Employer Information Reports (EEO-1) for the years 2007, 2011, and 2015. We report EEO-1 data starting in 2007 because EEOC made significant changes to its requirements related to the reporting of EEO-1 data over time. For example, beginning in 2007, EEOC changed its requirements related to the reporting of data on managers and changed its practices for collecting certain racial/ethnicity information. EEO-1 reports contain firm-level data that are submitted annually to EEOC, generally by private-sector firms with at least 100 employees or federal contractors with at least 50 employees that have a contract, subcontract, or purchase order amounting to $50,000 or more. Companies that fit the above criteria submit separate EEO-1 reports for their headquarters as well as each establishment facility. EEOC requires employers to use the North American Industry Classification System (NAICS) to classify their industry. To identify trends using EEO-1 data for workers, we analyzed data for companies with the NAICS codes we initially identified as technology industries. We selected the leading information technology companies using Standard & Poor's (S&P) 500 Information Technology Index list, which identifies the largest public information technology companies at a given time. In October 2016, this list consisted of 67 companies worldwide with stocks trading in the United States, and we analyzed EEO-1 data from 65 of these companies. For both analyses, we analyzed EEO-1 data from all job categories by gender, race and ethnicity, and industry sectors. For job categories, the EEO-1 form collects data on 10 major job categories: 1) Executives, Senior Level Officials and Managers; 2) First/Mid-Level Officials and Managers; 3) Professionals; 4) Technicians; 5) Sales Workers; 6) Administrative Support Workers; 7) Craft Workers; 8) Operatives; 9) Laborers and Helpers; and 10) Service Workers. In our analysis, "all other jobs" combines sales workers, administrative support workers, craft workers, operatives, laborers and helpers, and service workers.
We used the race/ethnicity categories used by the EEOC as follows: White, Black or African American, Asian (including Native Hawaiian or Other Pacific Islander), Hispanic or Latino, and "Two or more Races" (including American Indian or Alaska Native). We assessed the reliability of the EEO-1 data and determined that, despite limitations, they were sufficiently reliable for our analyses. To determine the reliability of the EEO-1 data that we received from EEOC, we interviewed knowledgeable EEOC officials, reviewed relevant documents provided by agency officials and obtained on its website, and performed manual data testing for missing variables.

Integrated Postsecondary Education Data System (IPEDS)

For our analysis of technology degree earners, we used degree completion data tabulated by the National Science Foundation from the National Center for Education Statistics' (NCES) Integrated Postsecondary Education Data System (IPEDS) for the year 2014. Using a variety of sources, such as academic research and interviews with representatives from academia, we defined technology-related fields as degree programs in computer science, engineering, and mathematics. We analyzed IPEDS data by race and gender for individuals who had obtained a bachelor's or master's degree in technology-related fields. We determined that the potential external candidates for technology positions generally had obtained either a bachelor's or a master's degree in a technology-related field. We used the race/ethnicity categories used by IPEDS as follows: White, Black, Asian (including Pacific Islander), Hispanic, and Multiracial or other (which includes American Indian or Alaska Native, Other or Unknown Race, and Two or more Races, i.e., respondents who selected one or more racial designations). Race and ethnicity breakouts are for U.S. citizens and permanent residents only, and thus do not include data on temporary residents. The analysis by gender includes temporary residents. To determine the reliability of IPEDS data, we reviewed relevant documents obtained on the NCES website, such as annual methodology reports and the handbook of NCES survey methods. We determined that data from IPEDS were sufficiently reliable for our purposes.

Analysis of EEOC and OFCCP Oversight

To identify how EEOC and OFCCP have overseen technology companies' compliance with federal equal opportunity and affirmative action requirements, we reviewed relevant federal statutes and regulations, EEOC and OFCCP policies, strategic planning documents, and operational manuals. We interviewed EEOC and OFCCP officials in headquarters and in two regional locations selected based on the large proportion of technology companies in those areas. At EEOC, we met with officials from the San Francisco and New York district offices. At OFCCP, we met with officials from the Pacific and Northeast regional offices. To explore charges of discrimination filed with the EEOC against technology companies, we planned to analyze data from the EEOC Integrated Mission System (IMS), which contains records on EEOC charges and enforcement activities. However, since industry code is not a mandatory field for investigators to complete, roughly half the entries did not have an industry code. Therefore, we could not reliably identify technology companies that have faced charges or enforcement. We attempted to match information we had developed on federal technology contractors with charges filed in the IMS database.
Depending on the matching method we used, this yielded very different results, and we determined this was not a sufficiently reliable method. Further, any matching method we used would have excluded technology companies that did not hold a federal contract. To obtain information on evaluations of technology contractors completed by OFCCP and complaints received against technology contractors, we took a two-step approach. First, using the Federal Procurement Data System–Next Generation (FPDS-NG), we developed a list of company establishments and their subsidiaries that received federal contract obligations in fiscal years 2011-2015 under any of the 55 NAICS codes we included above as technology industries. We selected only company establishments that received 50 percent or more of their total federal contract obligations under these NAICS codes. Each establishment was counted only once regardless of how many federal contracts it received during the time period. Using this method, we identified 43,448 establishments in our pool of "technology contractors." To identify subsidiaries, which are also subject to OFCCP requirements and evaluations, we identified any other establishments that shared the global vendor code with the contractors we identified, regardless of their NAICS code. This yielded 2,116 additional contractors. Second, we matched the names (removing suffixes) of the technology contractors and their subsidiaries that we identified in FPDS-NG against OFCCP's data on its evaluations of contractors to identify the evaluations of technology contractors that OFCCP opened and completed from fiscal year 2011 through fiscal year 2016. We conducted a similar matching exercise to identify the complaints OFCCP received against technology companies. In addition, we identified which of the leading technology companies had completed evaluations from fiscal year 2011 through fiscal year 2016. We obtained information during interviews with researchers and representatives of workforce and industry organizations and associations. In addition, we interviewed diversity and compliance representatives of eight of the leading information technology companies located in the San Francisco Bay Area that were also federal contractors to discuss their efforts to increase diversity and to gain their perspectives on the federal role in overseeing compliance with nondiscrimination laws. These companies were: Cisco Systems, Inc.; Facebook, Inc.; Google Inc.; Hewlett Packard Enterprise Company; Intuit Inc.; and Oracle America, Inc.

Appendix II: Technology Occupations

This is the list of technology occupations that we used in our analyses. We selected our occupations using Bureau of Labor Statistics (BLS) Standard Occupational Classification (SOC) System codes, and crosswalked those occupations to the corresponding U.S. Census Bureau occupation codes.

Appendix III: North American Industry Classification System (NAICS) Codes Identified as Technology-Related Industries

This is the list of the 55 6-digit North American Industry Classification System (NAICS) codes we identified as technology-related industries. To develop this list, we identified the 15 industries with the highest concentration of technology workers using U.S. Census Bureau industry codes and then used the U.S. Census Bureau's 2012 Industry Code List for Household Surveys to crosswalk the Census codes with NAICS codes.
Appendix V: Comments from the Department of Labor

Appendix VI: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Betty Ward-Zukerman (Assistant Director), Kate Blumenreich (Analyst-in-Charge), Sheranda Campbell, Julianne Hartmann Cutts, Clarita Mrena, Moon Parks, Alexandra Rouse, and John Yee made significant contributions to all phases of the work. Also contributing to this report were Rachel Beers, James Bennett, Hedieh Fusfield, Julia Kennon, Jean McSween, Jessica Orr, Dae Park, James Rebbe, Almeta Spencer, and Alexandra Squitieri.
Why GAO Did This Study

Technology companies are a major source of high-paying U.S. jobs, but some have questioned the sector's commitment to equal employment opportunity. EEOC provides federal oversight of nondiscrimination requirements by investigating charges of discrimination, and OFCCP enforces federal contractors' compliance with affirmative action requirements. GAO was asked to review workforce trends in the technology sector and federal oversight. This report examines (1) trends in the gender, racial, and ethnic composition of the technology sector workforce; and (2) EEOC and OFCCP oversight of technology companies' compliance with equal employment and affirmative action requirements. GAO analyzed workforce data from the American Community Survey for 2005-2015 and EEOC Employer Information Reports for 2007-2015, the latest data available during our analysis. GAO analyzed OFCCP data on compliance evaluations for fiscal years 2011-2016. GAO interviewed agency officials, researchers, and workforce, industry, and company representatives.

What GAO Found

The estimated percentage of minority technology workers increased from 2005 to 2015, but GAO found that no growth occurred for female and Black workers, whereas Asian and Hispanic workers made statistically significant increases (see figure). Further, female, Black, and Hispanic workers remain a smaller proportion of the technology workforce—mathematics, computing, and engineering occupations—compared to their representation in the general workforce. These groups have also been less represented among technology workers inside the technology sector than outside it. In contrast, Asian workers were more represented in these occupations than in the general workforce. Stakeholders and researchers GAO interviewed identified several factors that may have contributed to the lower representation of certain groups, such as fewer women and minorities graduating with technical degrees and company hiring and retention practices. Both the U.S. Equal Employment Opportunity Commission (EEOC) and the Department of Labor's Office of Federal Contract Compliance Programs (OFCCP) have taken steps to enforce equal employment and affirmative action requirements in the technology sector, but face limitations. While EEOC has identified barriers to recruitment and hiring in the technology sector as a strategic priority, when EEOC conducts investigations it does not systematically record the type of industry, thereby limiting the sector-related analyses that could help focus its efforts. EEOC has plans to determine how to add missing industry codes but has not set a timeframe to do this. In addition, OFCCP's regulations may hinder its ability to enforce contractors' compliance because OFCCP directs contractors to set placement goals for all minorities as a group rather than for specific racial/ethnic groups. OFCCP also has not made changes to its establishment-based approach to selecting entities for review in decades, even though changes have occurred in how workplaces are structured. Without taking steps to address these issues, OFCCP may miss opportunities to hold contractors responsible for complying with affirmative action and nondiscrimination requirements.

What GAO Recommends

GAO makes six recommendations, including that EEOC develop a timeline to improve industry data collection and that OFCCP take steps toward requiring more specific minority placement goals by contractors and assess key aspects of its selection approach.
EEOC neither agreed nor disagreed with the recommendation directed to it, and OFCCP stated that altering placement goal requirements would require regulatory change. GAO continues to believe the recommended actions are needed, as discussed in the report.
Background

DME Items Subject to Adjusted FFS Payment Rates

The Centers for Medicare & Medicaid Services (CMS) used payment information from the competitive bidding program (CBP) to adjust payment rates for 393 Healthcare Common Procedure Coding System (HCPCS) codes (generally referred to as "items" in this report) in non-bid areas. Most of these items were included in at least one CBP round; however, some are no longer included in current CBP rounds. For example, 81 items with adjusted rates were not included in the CBP rounds that were in effect at the end of calendar year 2016. CMS grouped the 393 items with adjusted rates into 11 general product categories. See table 1 for these categories and the number of items in each category.

CMS's Methodologies for Adjusting FFS Payment Rates in Non-Bid Areas Using CBP Information

CMS uses different methodologies to adjust fee-for-service (FFS) payment rates in non-bid areas. These adjustments are based on CBP payment information and depend on the number of CBP areas in which a particular item has been competitively bid and the geographic area in which the adjusted rate is applied. For example, for an item that is competitively bid in more than 10 CBP areas and is furnished to beneficiaries residing in non-rural areas of the contiguous United States, CMS calculates a separate adjusted rate for each of eight geographic regions. In each region, the item's average regional adjusted rate reflects the unweighted average competitively bid rate for all CBP areas located fully or partially within the region. To address concerns regarding the possible effect of adjusted rates on beneficiaries residing in rural areas, CMS may apply an additional premium to the adjusted rates for items furnished to beneficiaries residing in rural areas of the contiguous United States. Similarly, CMS may also apply a premium in non-contiguous areas of the United States—Alaska, Hawaii, and the U.S. territories—that applies to non-rural and rural areas alike. See figure 1 for a map of CBP and non-bid areas as of 2016.

Phase-In of Adjusted FFS Payment Rates

According to CMS, it initially used a phased-in approach to adjust FFS payment rates beginning in 2016; this allowed for a transition period in which the agency could closely monitor health outcomes and access to affected durable medical equipment (DME) items prior to implementing fully adjusted rates. From January 1 through June 30 of 2016, FFS payment rates were based on a 50/50 blend of non-adjusted and adjusted rates, and from July 1 through December 31 of the same year, FFS payment rates were 100 percent adjusted based on CBP information. However, the 21st Century Cures Act required CMS to retroactively apply the 50/50 blended payment rates to claims in the second half of 2016, delaying the fully adjusted payment rates to January 1, 2017. Because the retroactively applied 50/50 blended rates were based on newly available information from the CBP round 2 recompete that went into effect on July 1, 2016, the adjusted rates for the second half of 2016 may have differed from the adjusted rates for the first half of 2016. CMS contractors retroactively adjusted claims for this period, which were processed during the second half of calendar year 2017. Because this rate change became effective mid-December 2016, most decisions by suppliers and beneficiaries during the second half of 2016 were made based on the 100 percent adjusted rates, and the retroactive adjustments affected the total allowed charges, or expenditures, that suppliers were reimbursed.
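As a worked illustration of the rate mechanics described above, the sketch below uses invented dollar amounts and an illustrative rural premium; actual CBP rates and premiums vary by item and area and are set by CMS.

```python
# Worked sketch of the regional, rural, and transitional blended rates.

# Regional fully adjusted rate: unweighted average of the competitively
# bid rates for all CBP areas fully or partially within the region.
cbp_rates_in_region = [31.40, 28.75, 30.10, 29.95]  # invented amounts
regional_adjusted = sum(cbp_rates_in_region) / len(cbp_rates_in_region)

# Possible premium (illustrative 10 percent) for beneficiaries residing
# in rural areas of the contiguous United States.
rural_adjusted = regional_adjusted * 1.10

# Transitional 50/50 blend used from January 1 through June 30, 2016
# (and retroactively applied to the second half of 2016).
unadjusted_2015 = 52.00  # invented amount
blended_2016 = 0.5 * unadjusted_2015 + 0.5 * regional_adjusted

print(f"regional adjusted rate: {regional_adjusted:.2f}")
print(f"rural adjusted rate:    {rural_adjusted:.2f}")
print(f"50/50 blended rate:     {blended_2016:.2f}")
```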
The implementation of adjusted rates may also affect other populations in addition to Medicare suppliers and Medicare beneficiaries in non-bid areas, because some private and other government insurers base their payment rates on Medicare's fee schedule. For example, the federal government's TRICARE military health program uses Medicare's fee schedule to help determine how much it pays for DME items.

DME Supplier Requirements

CMS has established certain requirements that all DME suppliers must meet in order to enroll in Medicare and maintain Medicare billing privileges, which include accreditation and appropriate licensure. Specifically, DME suppliers must meet Medicare enrollment and quality standards. CMS also requires all DME suppliers and each of their locations to be accredited by a CMS-approved accrediting organization. In addition, DME suppliers must meet state licensure requirements in order to furnish certain items or services. Finally, certain DME suppliers are required to post a surety bond of at least $50,000 for each business location. There are two key differences between supplier requirements in non-bid areas versus CBP areas. First, only suppliers who are awarded a contract—referred to as contract suppliers—can furnish certain DME items at competitively determined prices to Medicare beneficiaries residing in CBP areas, and they are contractually obligated to furnish items in their contract upon request. According to CMS's competitive acquisition ombudsman, contract suppliers in CBP areas may receive more scrutiny than DME suppliers in non-bid areas because CMS can take action to ensure the suppliers are meeting their contract obligations. However, in non-bid areas, any Medicare-enrolled DME supplier can furnish DME items. DME suppliers do not sign contracts in non-bid areas and are not contractually obligated to furnish items upon request. Second, contract suppliers in CBP areas must accept Medicare assignment, meaning that they must accept the competitively determined Medicare payment rate in full (and may not charge beneficiaries more than any unmet deductible and 20 percent coinsurance), whereas suppliers in non-bid areas may choose not to accept assignment, and there is no limit on the amount they may charge a beneficiary.

CMS's Monitoring Activities

CMS has implemented several activities to monitor whether beneficiary access has been affected by the implementation of adjusted rates in non-bid areas, as summarized below.

Inquiries to 1-800-MEDICARE. Beneficiaries with DME questions—referred to by CMS as inquiries—are directed to call CMS's 1-800-MEDICARE call line. Callers are assisted by customer service representatives trained to answer questions and assist beneficiaries in finding DME suppliers. One CMS official told us the agency tracks DME-related inquiries to 1-800-MEDICARE but does not track whether inquiries are received from beneficiaries in CBP areas versus non-bid areas.

Health Status Monitoring Tool. CMS analyzes Medicare claims data to monitor real-time health outcomes, such as death, hospitalizations, emergency room visits, and physician visits for beneficiaries in both CBP and non-bid areas. CMS posts information on its website to show historical and regional trends in health outcomes for specific groups of beneficiaries.

Monitoring Changes in the Number of Suppliers and Beneficiary Utilization Rates.
CMS officials told us they closely monitor changes in the number of suppliers furnishing items subject to adjusted rates in non-bid areas as well as changes in beneficiary utilization of rate-adjusted items.

Monitoring Assignment Rates. CMS monitors the percentage of claims suppliers have submitted as "assigned" in non-bid areas. According to CMS, assignment rates are a good indicator of whether FFS payment amounts are sufficient. While CMS conducted beneficiary satisfaction surveys before and after the implementation of previous CBP rounds in order to measure changes in beneficiary satisfaction in CBP areas, CMS officials reported they have not conducted similar surveys of beneficiaries residing in non-bid areas.

Payment Rate Reductions Were Generally Significant but Varied, and Number of Suppliers Continued a Trend of Annual Decreases

FFS Payment Rate Reductions Were Generally Significant but Varied by Product Category and DME Item

The payment rate reductions for DME items in non-bid areas were generally significant. The average unweighted percentage reduction across the top product category items combined—measured by calculating the percentage change between the 2015 non-adjusted and the 2017 fully adjusted rates—was 46 percent. However, payment rate reductions varied by DME product category and by individual item within product categories. This is not unexpected given that the adjusted rates for each item were based on competitively determined payment rates from prior or current CBP rounds, and rate reductions for those payment rates also varied widely by product category and item. Specifically, average payment rate reductions by DME product category ranged from 18 percent to 74 percent with a midpoint of 47 percent. For example, the average payment rate reduction for the top items in the oxygen product category—the category that accounted for the highest percentage of total expenditures in 2016—was 39 percent. The range of reductions among individual items within product categories also varied. For example, payment rate reductions for the top items in the enteral nutrients product category ranged from 46 percent to 56 percent. In contrast, payment rate reductions for the three items in the negative pressure wound therapy (NPWT) product category ranged from 6 percent to 61 percent. (See table 2.) Table 3 shows 2015 non-adjusted and 2017 fully adjusted rates and the percentage reduction in these rates for the rate-adjusted item in each product category with the largest share of 2016 total expenditures. (See Appendix II for detailed information on the 2015 non-adjusted payment rates, 2016 transitional 50/50 blended adjusted rates, and 2017 fully adjusted rates for items with the highest 2016 expenditures in each product category.)

In 2016, Number of Suppliers Furnishing Rate-Adjusted Items in Non-Bid Areas Continued a Trend of Annual Decreases

The number of suppliers furnishing any of the 393 rate-adjusted items to beneficiaries in non-bid areas in 2016—the first year that CMS adjusted payment rates in non-bid areas—decreased 8 percent compared to 2015. This continued a trend of annual decreases in non-bid areas going back to at least 2011—the first year CMS began implementing the CBP in nine areas. The largest percentage decrease in suppliers, 13 percent, occurred in 2014 (the year after the CBP was expanded to an additional 100 areas), followed by 9 and 8 percent decreases in 2015 and 2016, respectively.
This information is based on our review of the number of suppliers billing Medicare, so it is unclear how much of the decrease was attributable to suppliers closing their businesses, conducting mergers or acquisitions, no longer accepting Medicare beneficiaries, or other factors. Also, the number of suppliers furnishing non-adjusted items to beneficiaries residing in non-bid areas decreased 4 percent in 2016 compared to 2015. Similar to the trends found for rate-adjusted items, this continued a trend of annual decreases since at least 2011, although these decreases were smaller. As was the case with rate-adjusted items, the largest percentage decrease in the number of suppliers occurred in 2014 and then slowed in subsequent years. (See fig. 2.) Because 2016 was the most recent year of complete Medicare claims data available at the time of our study, we could only review data for the first year that adjusted rates were in effect in non-bid areas and could not determine if these trends continued in 2017. Some DME industry trade organization representatives we interviewed reported that suppliers face an additional challenge of having to travel long distances when furnishing items to beneficiaries in rural areas, which may result in suppliers limiting their service areas. However, there was little difference between non-rural and rural non-bid areas in terms of changes in the number of suppliers between 2015 and 2016. For example, the number of suppliers furnishing rate-adjusted items to beneficiaries residing in non-rural non-bid areas decreased 7 percent between 2015 and 2016 compared with a decrease of 8 percent in rural non-bid areas. (See fig. 3.) There was also little difference between non-rural and rural areas in terms of changes in the number of suppliers who furnished non-adjusted items to beneficiaries residing in non-bid areas. For example, between 2015 and 2016 the number of suppliers furnishing non-adjusted items to beneficiaries in non-bid areas decreased 3 percent in non-rural areas and 4 percent in rural areas. We found that the number of suppliers furnishing rate-adjusted items in non-bid areas decreased between 2015 and 2016 in all product categories, though the extent of these decreases varied. For example, we found that the number of suppliers furnishing items in the infusion pumps product category decreased by 1 percent between 2015 and 2016, while the number of suppliers furnishing general home equipment decreased by 10 percent. Trends for 2010 through 2016 were generally similar. The number of suppliers decreased in all product categories, and the extent of decreases varied. Individual suppliers may furnish items across multiple product categories. (See fig. 4.)

Beneficiary Utilization of Rate-Adjusted Items Held Steady in 2016 Following Three Years of Decreases

The number of beneficiaries in non-bid areas receiving at least one rate-adjusted item in 2016—the first year that CMS implemented adjusted rates in non-bid areas—showed little change compared to 2015, decreasing by less than one-half of a percentage point. This stabilization in beneficiary utilization occurred following three years of decreases in non-bid areas, with the largest decrease (4 percent) in 2014—the year following the CBP's expansion to an additional 100 areas. In comparison, the number of beneficiaries in non-bid areas who received at least one non-adjusted item increased 3 percent in 2016. (See fig. 5.) In general, the annual trends in CBP areas paralleled those in non-bid areas.
Between 2015 and 2016, there was little change in the number of beneficiaries in CBP areas who received at least one rate-adjusted item, with a decrease of less than one-half a percentage point. In non-bid areas, there was little difference between non-rural and rural areas in terms of changes in 2016 in the number of beneficiaries who received rate-adjusted items, with decreases in both of less than one-half a percentage point. There was also little difference in terms of the changes in the number of beneficiaries in non-bid areas who received non-adjusted items. The total decrease for the 2010 to 2016 period was smaller in non-rural areas than rural areas. (See fig. 6.) We found that the number of beneficiaries in non-bid areas receiving at least one rate-adjusted item decreased in 2016 for 9 of the 11 product categories. Changes ranged from a 45 percent decrease for the TENS product category to a 9 percent increase for the CPAP/RAD product category. For the 2010 through 2016 period, most product categories also had total net percentage decreases, and percentage changes varied across product categories. (See fig. 7.) Individual product category decreases were generally larger in CBP areas than in non-bid areas. For example, between 2010 and 2016, the percentage change in the number of beneficiaries who received oxygen product category items was -29 percent in CBP areas as compared to -19 percent in non-bid areas. CPAP/RAD was the one product category for which the number of beneficiaries receiving at least one item increased rather than decreased in 2016 and between 2010 and 2016 in both non-bid and CBP areas. This is consistent with what we have previously reported. We could only report on utilization for one year following adjustment of rates because 2016 was the most recent year with complete data available; as such, utilization trends may differ in 2017 and subsequent years.

Available Evidence Indicates No Widespread Access Issues in the First Year of Reduced Durable Medical Equipment Payment Rates in Non-Bid Areas

CMS's Health Status Monitoring Tool Indicates that Beneficiaries in Non-Bid Areas Have Not Experienced Changes in Health Outcomes

CMS has reported that data from its health status monitoring tool indicate the reduced payment rates have not resulted in changes in access to DME items or health outcomes in non-bid areas in 2016 as compared to 2015. CMS uses the health status monitoring tool to analyze Medicare claims data and track seven health outcomes—deaths, hospitalizations, emergency room visits, physician visits, admissions to skilled nursing facilities, average number of days spent hospitalized in a month, and average number of days in a skilled nursing facility in a month—for beneficiaries in both CBP and non-bid areas. The data for non-bid areas are broken out by rural and non-rural areas across eight different regions of the country and non-contiguous U.S. areas. CMS monitors these health outcomes for three Medicare FFS beneficiary groups: 1) all beneficiaries enrolled in FFS, 2) beneficiaries who are likely to use one of the rate-adjusted items on the basis of related health conditions, and 3) beneficiaries who have a claim for one of the rate-adjusted items. CMS's tool considers historical and regional trends in health status to monitor health outcomes in all CBP and non-bid areas.
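The sketch below illustrates, in simplified form, the kind of claims-based outcome monitoring described above. The file name, the monthly panel layout (bene_id, month, has_rate_adjusted_claim, died, hospitalized), and the group definition are hypothetical simplifications, not CMS's actual implementation.

```python
# Minimal sketch of claims-based outcome monitoring for one beneficiary group.
import pandas as pd

panel = pd.read_csv("monthly_beneficiary_panel.csv")  # hypothetical monthly panel

# Group 3 in CMS's scheme: beneficiaries with a claim for a rate-adjusted item.
ids = set(panel.loc[panel["has_rate_adjusted_claim"] == 1, "bene_id"])
group = panel[panel["bene_id"].isin(ids)]

# Monthly deaths and hospitalizations per 1,000 beneficiaries, which can then
# be compared before (through 2015) and after (2016) the rate adjustment.
monthly = (group.groupby("month")
                .agg(benes=("bene_id", "nunique"),
                     deaths=("died", "sum"),
                     admits=("hospitalized", "sum")))
monthly["death_rate"] = 1000 * monthly["deaths"] / monthly["benes"]
monthly["admit_rate"] = 1000 * monthly["admits"] / monthly["benes"]
print(monthly[["death_rate", "admit_rate"]].tail(12))
```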
CMS officials told us that staff meet biweekly to review monitoring tool trends as well as external complaints or stakeholder feedback to identify and investigate potential DME access issues. The officials told us these investigations have not identified any adverse health outcomes as a result of the implementation of adjusted rates. We previously conducted an analysis of CMS's methodologies and scoring algorithm that focused on evaluating health outcome trends in CBP areas and found them to be generally sound. CMS officials told us they have not made significant revisions to the tool's underlying methodologies but did create a separate workbook specially tailored to the implementation of the adjusted rates in non-bid areas that includes additional capabilities, such as review of assignment rates. In addition, because CMS uses a 4-month window to evaluate health outcomes of all beneficiaries that meet the criteria, for this report we also conducted our own analysis of health outcomes over a longer period of time to determine if our results for a particular set of beneficiaries were consistent with CMS's shorter-term results. Specifically, we tracked a cohort of about 256,000 beneficiaries in both non-bid and CBP areas who began using oxygen items in the first half of 2014 and followed their utilization through the end of 2016 to determine if mortality and hospital admissions rates remained consistent before and after the implementation of adjusted rates. We found that the trends in mortality and hospital admissions rates for this cohort were generally consistent with the cumulative trends displayed in CMS's monitoring tool. We did not find a change in health status between 2015 and 2016 related to the reduced payment rates.

The Percentage of Medicare-Enrolled Participating Suppliers and Rates of Assignment for Rate-Adjusted Items Did Not Change Following the Implementation of Adjusted Rates

One way that CMS verifies that beneficiaries have access to needed items and services is by reviewing the percentage of suppliers who enroll as Medicare "participating" suppliers and the percentage of claims that suppliers have submitted as assigned. Participating suppliers must accept the FFS payment rate in full for all claims and cannot charge beneficiaries an additional amount above the 20 percent copayment. DME suppliers can also elect to be "non-participating" suppliers, meaning they can choose to accept assignment on a claim-by-claim basis, and there is no limit on the amount that they can charge for a DME item. Non-participating suppliers in non-bid areas are not required to accept assignment of Medicare claims. This means a non-participating supplier can decide not to accept assignment for an item and can charge beneficiaries an amount above the Medicare payment rate. CMS told us the rate of participating suppliers in 2016 was unchanged from 2015 and decreased by 1 percent in 2017, and the rates of assignment for rate-adjusted items remained very high (over 99 percent of all claims for rate-adjusted items in non-bid areas) in 2016 and 2017.

Number of Inquiries to CMS and the State Health Insurance Assistance Program Did Not Increase Following the Implementation of Adjusted Rates

CMS told us the nationwide number of inquiries to 1-800-MEDICARE associated with access issues did not increase after the implementation of adjusted rates.
According to a CMS official, CMS uses the same process for all DME calls received, regardless of whether the caller lives in a CBP or non-bid area, so there is no way to distinguish DME-related calls in CBP areas from non-bid areas. However, the CMS official said there has been no evidence of systemic access issues in non-bid areas, such as beneficiaries reporting they were not able to find suppliers to furnish DME items with adjusted rates. We spoke with officials from three of CMS's regional offices, who also reported there has not been an increase in the number of DME-related inquiries since adjusted rates in non-bid areas went into effect. One of the officials told us that her regional office receives information forwarded from the other CMS regional offices about all inquiries related to Medicare Parts A and B. She also said the regional offices generally receive direct inquiries from a variety of sources, including beneficiaries, beneficiary advocates, local partners, congressional district offices, and providers, and some are also escalated by 1-800-MEDICARE customer service representatives. According to that official, each year regional offices receive close to 40,000 inquiries nationwide regarding a wide range of DME issues, and most are related to questions about coverage and documentation requirements (such as what types of DME may require additional documentation or face-to-face visits with physicians). In addition, the official told us that regional offices capture detailed information about each inquiry. This includes contact information for the individual submitting the inquiry, the type of DME involved and whether it is included in the CBP, and the regional office's response. Officials said they review this information to specifically look for access issues or trends by product category but have not identified any issues. One official said she had heard anecdotal reports of beneficiaries contacting regional offices claiming they had experienced access issues, but such reports did not indicate these issues were widespread or sustained. We also interviewed representatives from the State Health Insurance Assistance Program who reported there has not been an increase in requests for assistance with DME-related issues since the adjusted rates went into effect. The representatives told us State Health Insurance Assistance Program counselors log all contacts, but the data do not distinguish between non-bid and CBP areas. However, they said counselors have received about 300 to 500 DME-related contacts each quarter since 2015, and the number of requests for assistance with DME-related issues remained consistent before and after adjusted rates went into effect. State Health Insurance Assistance Program representatives said counselors attempt to resolve issues on their own, but can also contact CMS's regional offices for assistance.

Several Stakeholder Groups Reported Anecdotal Examples of Specific Beneficiary Access Concerns, but Did Not Have Evidence That Issues Were Widespread

We interviewed representatives from one state hospital association, three beneficiary advocacy groups, and four DME industry trade organizations who provided anecdotal examples of varying degrees of beneficiary access issues in non-bid areas. For example, representatives from the state hospital association told us some hospital case managers in non-bid areas have reported difficulty in locating suppliers to provide DME items such as wheelchairs or walkers, but these issues are not widespread.
A representative from one beneficiary advocacy group told us her organization does not receive many direct inquiries from Medicare beneficiaries regarding access to DME, but it has been contacted by entities such as hospital discharge planners and pharmacies regarding issues with delivery of DME items. For example, the representative said some hospital discharge planners have reported that DME suppliers are more resistant to delivering DME items, such as wheelchairs and walkers, to the hospital when the beneficiary resides in a non-bid area as opposed to a CBP area. However, the representative said such reports are anecdotal, and she does not think the issues reported are widespread or have created significant hardship. She added that her organization makes webinars available on a fairly regular basis, and very few people signed up for the DME webinar, which was not the case for webinars held on other topics. In contrast, a representative of another beneficiary advocacy group, one that focuses on a condition in which beneficiaries would typically use oxygen items with adjusted rates, told us that without a formal research instrument it is difficult to determine whether the increase in complaints her group began receiving in 2016 from beneficiaries in non-bid areas is directly related to the adjusted rates. She said she believes it is, however, because she had not heard certain types of complaints before the adjusted rates went into effect. For example, she said the beneficiary advocacy group has received complaints about reduced delivery services and reductions in the number of portable oxygen tanks that DME suppliers are willing to furnish in a single delivery, and that these complaints come more frequently from beneficiaries who live in rural areas. The representative said that, given that rural areas may have higher delivery costs, it is not surprising that some suppliers may have decreased the number of deliveries, but she was surprised to hear they have decreased the number of portable oxygen tanks they are willing to provide. According to CMS, the agency encourages individuals to report to CMS any supplier that delivers fewer tanks of oxygen than a beneficiary needs, so this violation can be immediately addressed. Representatives from four DME industry trade organizations that we spoke with told us the implementation of adjusted rates has caused some suppliers to change their business models and practices. Specifically, individuals from all four DME industry trade organizations told us DME companies have lowered costs by reducing their number of employees, decreasing their service areas, or consolidating deliveries in specific areas to only certain days. For example, several DME suppliers told us that since the implementation of adjusted rates, they will only service beneficiaries who reside within the city limits or within a certain number of miles of their locations. Several DME suppliers told us the quality and range of items provided by DME suppliers in non-bid areas has changed since the adjusted rates went into effect. For example, several suppliers reported they provide cheaper, lower-quality items and that some suppliers will no longer provide liquid oxygen to Medicare beneficiaries. In addition, individuals from all four DME industry trade organizations told us there have been delays in hospital discharges as a result of not being able to find a DME supplier to provide needed DME.
In contrast, CMS officials told us they investigated reported concerns about delayed patient discharges because of difficulties in acquiring rate-adjusted items and found there has not been a noticeable change in the average length of hospital stay before and after the implementation of adjusted rates. Specifically, CMS officials told us they measured (1) average length of hospital stay for beneficiaries who received new rate-adjusted items shortly after their discharge, (2) whether beneficiaries were being discharged prior to receiving new rate-adjusted items, and (3) average length of stay for beneficiaries in individual access groups whether or not they received rate-adjusted items after being discharged. According to CMS, results of this analysis indicated no apparent changes in the average length of hospital stay after adjusted rates were implemented. In addition to speaking with these representatives, we also reviewed several publicly released studies that assessed the effect of the implementation of adjusted rates on beneficiaries, DME suppliers, and others. We found these studies did not provide persuasive evidence of substantial effects, primarily because of methodological issues with how the participants in the studies were recruited. Specifically, respondents were recruited on social media platforms or through targeted email notifications, raising concerns about selection bias. Although the number of DME suppliers and beneficiary utilization of DME items have decreased throughout the past several years, available evidence indicates there were not widespread beneficiary access issues in 2016. According to CMS officials, the long-term decreases in utilization do not necessarily indicate that beneficiaries did not receive needed DME; the officials suggested instead that these decreases are the result of a decline in unnecessary utilization. However, some stakeholders we interviewed continued to express concerns that lower FFS payment rates may have made it more difficult for some beneficiaries to receive needed DME, and one DME trade organization told us some decreases in utilization could be attributed to beneficiaries opting to pay for items outright rather than going through Medicare. Because experience with adjusted rates is limited to their first year in effect, some effects on the number of DME suppliers and on utilization may take longer to appear, and trends could differ in 2017 or subsequent years. This underscores the importance of CMS's continued monitoring activities. Agency Comments We provided a draft of this report to HHS for comment. HHS provided technical comments, which were incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of Health and Human Services and appropriate congressional committees. The report will also be available at no charge on our website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or clowersa@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.
Appendix I: The Centers for Medicare & Medicaid Services' Phase-In of the Competitive Bidding Program and Other Antifraud Initiatives, 2008 through 2019 The Centers for Medicare & Medicaid Services (CMS) has implemented several antifraud efforts that affect durable medical equipment (DME) suppliers. Specifically, CMS began phasing in DME competitive bidding program (CBP) rounds in 2008. (See fig. 8.) In addition to the CBP, CMS has also implemented several other broader initiatives. (See fig. 9.) Appendix II: Medicare Fee-for-Service (FFS) Payment Rates for Top Expenditure Items in Each Durable Medical Equipment (DME) Product Category, 2015 to 2017 Table 4 includes the top five Healthcare Common Procedure Coding System (HCPCS) codes for each product category based on the percentage of 2016 total expenditures for items included in the competitive bidding program (CBP) and subject to adjusted rates in non-bid areas. Combined, these items account for 80 percent of 2016 total expenditures across all 393 rate-adjusted items. Appendix III: GAO Contact and Staff Acknowledgments GAO Contact A. Nicole Clowers, (202) 512-7114 or clowersa@gao.gov. Staff Acknowledgments In addition to the contact named above, Kathleen M. King, Director; Martin T. Gahart, Assistant Director; Michelle Paluga, Analyst-in-Charge; Sam Amrhein; Todd Anderson; Barbara Hansen; and Emily Wilson made key contributions to this report. Related GAO Products Medicare: CMS's Round 2 Durable Medical Equipment and National Mail-order Diabetes Testing Supplies Competitive Bidding Programs. GAO-16-570. Washington, D.C.: September 15, 2016. Medicare: Utilization and Expenditures for Complex Wheelchair Accessories. GAO-16-640R. Washington, D.C.: June 1, 2016. Medicare: Bidding Results from CMS's Durable Medical Equipment Competitive Bidding Program. GAO-15-63. Washington, D.C.: November 7, 2014. Medicare: Second Year Update for CMS's Durable Medical Equipment Competitive Bidding Program Round 1 Rebid. GAO-14-156. Washington, D.C.: March 7, 2014. Medicare: Review of the First Year of CMS's Durable Medical Equipment Competitive Bidding Program's Round 1 Rebid. GAO-12-693. Washington, D.C.: May 9, 2012. Medicare: The First Year of the Durable Medical Equipment Competitive Bidding Program Round 1 Rebid. GAO-12-733T. Washington, D.C.: May 9, 2012. Medicare: Issues for Manufacturer-level Competitive Bidding for Durable Medical Equipment. GAO-11-337R. Washington, D.C.: May 31, 2011. Medicare: CMS Has Addressed Some Implementation Problems from Round 1 of the Durable Medical Equipment Competitive Bidding Program for the Round 1 Rebid. GAO-10-1057T. Washington, D.C.: September 15, 2010. Medicare: CMS Working to Address Problems from Round 1 of the Durable Medical Equipment Competitive Bidding Program. GAO-10-27. Washington, D.C.: November 6, 2009. Medicare: Covert Testing Exposes Weaknesses in the Durable Medical Equipment Supplier Screening Process. GAO-08-955. Washington, D.C.: July 3, 2008. Medicare: Competitive Bidding for Medical Equipment and Supplies Could Reduce Program Payments, but Adequate Oversight Is Critical. GAO-08-767T. Washington, D.C.: May 6, 2008. Medicare: Improvements Needed to Address Improper Payments for Medical Equipment and Supplies. GAO-07-59. Washington, D.C.: January 31, 2007. Medicare Durable Medical Equipment: Class III Devices Do Not Warrant a Distinct Annual Payment Update. GAO-06-62. Washington, D.C.: March 1, 2006.
Medicare: More Effective Screening and Stronger Enrollment Standards Needed for Medical Equipment Suppliers. GAO-05-656. Washington, D.C.: September 22, 2005. Medicare: CMS’s Program Safeguards Did Not Deter Growth in Spending for Power Wheelchairs. GAO-05-43. Washington, D.C.: November 17, 2004. Medicare: Past Experience Can Guide Future Competitive Bidding for Medical Equipment and Supplies. GAO-04-765. Washington, D.C.: September 7, 2004.
Why GAO Did This Study To achieve Medicare DME savings, Congress required CMS to implement a CBP in certain geographic areas for certain DME items. Beginning in 2011, CMS implemented the CBP in several phases. The agency estimates that the CBP will save the Medicare program $19.7 billion between 2013 and 2022. The Patient Protection and Affordable Care Act required CMS to use CBP information to adjust fee-for-service payment rates for certain DME items in non-bid areas. On January 1, 2016, adjusted rates for 393 items went into effect in non-bid areas. CMS estimated these adjustments will save the Medicare program about $3.6 billion between fiscal years 2016 and 2020. GAO was asked to review the potential effects of reduced payment rates for DME in non-bid areas. In this report, GAO examines (1) payment rate reductions and any changes in the number of suppliers; (2) any changes in the utilization of rate-adjusted items; and (3) available evidence related to potential changes in beneficiaries' access to rate-adjusted items. GAO compared non-adjusted 2015 fee-for-service payment rates to adjusted 2016 and 2017 rates and reviewed Medicare claims data from 2010 through 2016. GAO also reviewed CMS's monitoring activities and interviewed CMS officials. In addition, GAO interviewed select beneficiary advocacy groups and DME industry trade organizations. What GAO Found The Centers for Medicare & Medicaid Services (CMS) implemented a competitive bidding program (CBP) for certain durable medical equipment (DME), such as wheelchairs and oxygen, in 2011 that is currently operating in 130 designated U.S. areas. On January 1, 2016, CMS used information from the CBP to start adjusting Medicare fee-for-service payment rates for certain DME throughout the country in areas that had previously not been subject to the CBP (known as non-bid areas). For the first year adjusted rates were in effect in non-bid areas, GAO found: Reductions in payment rates were generally significant but varied by category of DME item. The unweighted average reduction in payment rates for the five rate-adjusted DME items with the highest expenditures in 2016 within each DME category was 46 percent. Changes in the number of suppliers furnishing rate-adjusted items were generally consistent with the years before adjusted rates went into effect. GAO found that the number of suppliers furnishing rate-adjusted items in non-bid areas in 2016 decreased 8 percent compared to 2015. GAO's review of Medicare claims data found that beneficiary utilization of rate-adjusted items in non-bid areas in 2016 showed little change compared to 2015. GAO also found that CMS's activities to monitor beneficiary access, including changes in health outcomes, showed little change between 2015 and 2016. GAO interviewed several stakeholder groups that reported anecdotal examples of specific beneficiary access concerns they attributed to the rate adjustments, but stakeholders could not provide evidence to substantiate that the access issues were widespread. GAO's findings are consistent with CMS's monitoring results, which indicate that there were no widespread effects on beneficiary access in the year after the adjusted rates went into effect. However, some effects may take longer to appear, underscoring the importance of CMS's continued monitoring activities. The Department of Health and Human Services provided technical comments on a draft of this report, which GAO incorporated as appropriate.
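To make the rate-comparison arithmetic concrete, the following minimal sketch shows one way an unweighted average reduction could be computed across rate-adjusted items. The HCPCS codes are real code numbers, but every dollar amount below is a hypothetical placeholder rather than an actual fee schedule rate; the sketch illustrates only the calculation.

```python
# Minimal sketch: unweighted average payment-rate reduction across
# rate-adjusted DME items. Dollar amounts are hypothetical placeholders,
# not actual Medicare fee schedule rates. GAO's 46 percent figure covered
# the five highest-expenditure rate-adjusted items in each DME category.

# (HCPCS code, 2015 unadjusted rate, 2016 adjusted rate) -- illustrative.
items = [
    ("E1390", 180.92, 103.94),  # oxygen concentrator (hypothetical rates)
    ("K0823", 563.19, 289.44),  # power wheelchair (hypothetical rates)
    ("E0601", 104.26, 52.63),   # CPAP device (hypothetical rates)
]

# "Unweighted" means each item counts equally, regardless of expenditures.
reductions = [(old - new) / old for _, old, new in items]
unweighted_average = sum(reductions) / len(reductions)
print(f"Unweighted average reduction: {unweighted_average:.0%}")  # ~47% here
```

Because each item counts equally, a low-expenditure item moves an unweighted average as much as a high-expenditure one.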
Background State Expenditure Reporting In order to receive federal matching funds, states report expenditures quarterly to CMS on the CMS-64. States are required to report their expenditures to CMS within 30 days of the end of each quarter, but may adjust their past reporting for up to 2 years after the expenditure was made, referred to as the 2-year filing limit. Adjustments can reflect resolved disputes or reclassifications of expenditures. Expenditures reported after the 2-year filing limit are generally not eligible for a federal match, with certain exceptions. The CMS-64 is a series of forms that capture expenditure data for different aspects of states' Medicaid programs, such as different types of services, populations, and federal matching rates. (See table 1 for examples of the expenditure types captured by the CMS-64.) States report their expenditures quarterly on the CMS-64 at an aggregate level—such as a state's total expenditures for such categories of services as inpatient hospital services—and these reported expenditures are not linked to individual enrollees or services. States' reporting may vary depending on the features of their Medicaid program. Some examples of this variation include the following: States that expanded eligibility under PPACA would need to report expenditures not only by the type of services (e.g., inpatient hospital services), but also by populations receiving different federal matching rates, such as expansion enrollees. States with waivers—that is, where the state received approval from HHS to waive certain Medicaid requirements in order to test and evaluate new approaches for delivering and financing care under a demonstration—would need to report those expenditures associated with these waivers on additional forms. CMS Oversight of State Expenditure Reporting CMS is responsible for assuring that expenditures reported by states are supported and allowable, meaning that the state actually made and recorded the expenditure and that the expenditure is consistent with Medicaid requirements. CMS regional offices perform the ongoing oversight, with enhanced oversight procedures in the 20 states with the highest federal Medicaid expenditures. (See fig. 1.) CMS is required to review the expenditures reported by states each quarter. (See fig. 2.) Regional office reviewers have 50 days to review the expenditures and compute the federal share of states' Medicaid expenditures. As part of the quarterly review, regional office reviewers also check that expenditures receive the correct matching rate. In general, the amount of federal funds that states receive for Medicaid services is determined annually by a statutory formula—the Federal Medical Assistance Percentage (FMAP)—which results in a specific federal matching rate for each state. However, there are a number of exceptions where higher federal matching rates can apply for certain types of beneficiaries, services, or administrative costs. See table 2 for examples of higher matching rates that apply for expenditures for certain types of enrollees, services, or administrative costs. When CMS identifies questionable expenditures or errors through its reviews, there are several ways that they can be resolved, as summarized below. Deferral of federal funds. CMS can defer federal matching funds if, during the quarterly review, the regional office reviewer needs additional information to determine whether a particular expenditure is allowable.
The reviewer may recommend that CMS defer the expenditure until the state provides additional support or corrects the reporting. State reducing reported expenditures. If the state agrees that the questionable expenditure is an error, the state can submit an adjusted report during the quarterly review or make an adjustment in a subsequent quarter. These adjustments prevent federal payments for those expenditures. Disallowance of expenditure. If CMS determines an expenditure is not allowable, CMS can issue a disallowance, and the state returns federal funds through reductions in future federal allocations. States may appeal disallowances. CMS Has Processes in Place to Assure that State-Reported Medicaid Expenditures Are Supported and Allowable, but Weaknesses Limit Its Ability to Effectively Target Risk CMS uses a variety of processes to assure that state-reported expenditures are supported during quarterly reviews and performs focused financial management reviews on expenditures considered at risk of not complying with Medicaid requirements. Although we found that CMS was identifying errors and compliance issues using both review methods, we also found weaknesses in how CMS targets its oversight resources to address risks. CMS Uses Quarterly Reviews, Supplemented with More Focused Reviews, to Assure that Reported Expenditures Are Supported and Allowable and Has Detected Errors in the Process CMS uses quarterly reviews to assess whether expenditures are supported by the state's accounting systems and are in accordance with CMS-approved methodologies, plans, and spending caps, and whether there are significant unexplained variances—changes in expenditures—from one quarter to the next (referred to as a variance analysis). CMS review procedures include validation measures that check to ensure that expenditures were reported within the 2-year limit, which is a check done on all types of expenditures. Another validation measure compares expenditures to various approval documents. For example, when a state has a waiver in place, expenditures are reviewed against waiver agreements that authorize payment for specified services or populations. Other examples include comparing supplemental payment expenditures to caps set for those expenditures. (See table 3.) Our examination of the quarterly reviews indicated that the reviews involved significant coordination with other CMS staff and the state. In addition to reviewing state documentation, officials from two regional offices told us that they consult other regional office staff who oversee the approval of new expenditures to ensure that expenditures reflect approved program features. For example, officials in region 9 told us that in reviewing managed care expenditures, they consult with their colleagues who review the state's payment methodologies for capitated payments. In reviewing information technology development expenditures—which are subject to a higher federal matching rate—reviewers for all six selected states examined advanced planning documents, which requires coordination with staff who approve those documents to ensure that the state was receiving the correct matching rates and staying within the approved amounts. With regard to coordination with states, we found that regional reviewers for all six reviews contacted states to follow up on issues identified during the review. Officials also described being in regular contact with states to stay abreast of program, system, and staffing changes to inform their reviews.
For example, according to regional officials, Arkansas experienced some significant and unexpected staffing challenges in 2016 that resulted in delays in the state's reporting of expenditures and returning of federal overpayments, and the reviewer worked closely with state staff to track the state's progress. We found evidence that reviewers identified errors during their quarterly reviews. In the six quarterly reviews we examined, regional offices identified errors in three of the six states. For example, region 3 reviewers found errors in Maryland's expenditure reporting—including claims for the wrong matching rate for two enrollees who were not eligible for PPACA's Medicaid expansion and reporting provider incentive payments on the wrong line—and worked with the state to correct those errors. Additionally, region 9 reviewers found errors in California's reporting of expenditures. For example, they found that the state reported waiver expenditures for the incorrect time period, which has implications for CMS's ability to monitor and enforce spending limits for the waiver. Reviewers worked with the state to correct those errors. To supplement the quarterly reviews, CMS generally directs regional offices to conduct a focused financial management review (FMR) each year on an area of high risk within the region, typically within one state. According to regional officials, CMS uses these reviews to investigate expenditures in greater depth and detail than is reasonable within the timeframes of a quarterly review. For example, reviewers can examine individual claims for services from providers or the methodologies developed for certain payment types. Regional reviewers also use these reviews to investigate errors that could not have been detected by the quarterly review. For example, regional office 6 officials told us that they uncovered inappropriate financing arrangements when they used an FMR to examine how Texas financed the state share of its supplemental payments to hospitals in one of its counties. To do so, the regional office reviewed payments from the state to the provider, reviewed project plans, and interviewed providers—steps that are not part of the quarterly review process. Rather, in the quarterly review, the reviewer only checks that state-reported payments are supported by state accounting records and are within applicable caps; thus, inappropriate financing of the state share would not have been detected through the quarterly review. In fiscal years 2014 through 2017, CMS used FMRs to review various expenditures considered to be at risk for not complying with Medicaid requirements. Specifically, as outlined in annual work plans, regional offices planned to conduct 31 FMRs and estimated that the total amount of federal funds at risk in expenditure areas covered by their planned reviews was $12 billion. (See app. I.) Planned FMRs targeted a wide range of topics, with the reviews most frequently targeting expenditures for the Medicaid expansion population. (See table 4.) We found that CMS frequently identified compliance issues through FMRs. As of March 2018, CMS reported that reviewers had identified compliance issues with financial impact in 11 of the 31 planned FMRs, though most of those findings were still under review. More findings from the planned FMRs are likely as some of the reviews were still ongoing. We reviewed the draft results for five FMRs. Among these, CMS found that four states were reporting expenditures that were not allowable.
For example, as noted earlier, a 2014 FMR on supplemental payments in Texas revealed inappropriate funding arrangements, and CMS issued a disallowance for approximately $27 million. In some cases, FMRs did not have apparent financial findings, but identified significant internal control weaknesses in the state and recommended specific corrective actions—such as better aligning eligibility and expenditure systems to detect and correct irregularities—that would provide greater assurances that federal funds are appropriately spent. Both the quarterly reviews and the FMRs occur in conjunction with other ongoing CMS financial oversight activities. For example, in addition to reviewing expenditures, regional office reviewers assess how states estimate their costs, set payment rates for managed care and home- and community-based services, and allocate costs among different Medicaid administrative activities under their cost allocation plans. CMS officials told us that issues relating to state compliance with Medicaid requirements for expenditures could be identified during these other oversight activities and could inform follow-up during the quarterly reviews or be the subject of an FMR. Officials also told us that since FMRs were instituted, the agency has built in more front-end procedures for preventing problems with the accuracy and allowability of reported expenditures. As examples, they cited their work on managed care rate reviews, among other things. Weaknesses Limit CMS's Ability to Effectively Target Risk in Its Oversight of Expenditures We identified two weaknesses in how CMS is allocating resources for overseeing state-reported expenditures that limited the agency's ability to target risk in its efforts to assure that these expenditures are supported and consistent with Medicaid requirements. First, we found that CMS has allocated similar staff resources to states with differing levels of risk. For example, the staff resources dedicated to reviewing California's expenditures—ranking first nationally in expenditures and constituting 15 percent of all federal Medicaid expenditures—are comparable to those for significantly smaller states in other regions, despite California's history of reporting challenges and its inability to provide electronic records, which requires on-site review. (See fig. 3.) CMS has allocated 2.2 staff to review California's expenditures in contrast to one person to review Arkansas' expenditures, which constitute 1 percent of federal Medicaid expenditures, and Arkansas does not have a similar history of complex reporting challenges. We also found that California's reviewers have set a higher threshold for investigating variances in reported expenditures than in the five other selected states. Specifically, reviewers in California investigated variances of plus or minus 10 percent only if the variances also represented more than 2 percent of medical expenditures, or $450 million in the quarter we reviewed. The state experienced an approximately 24 percent increase in its prescription drug expenditures—roughly $200 million—during that quarter, but the variance was deemed not significant. In contrast, for two of our five other selected states, we found that reviewers generally investigated variances of plus or minus 10 percent regardless of the dollar amount of the variance, and in the remaining three states they had significantly lower dollar thresholds than used for California.
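The variance thresholds just described amount to a two-part test. The sketch below is a minimal rendering of that logic, not CMS's actual review tooling; the function and parameter names are ours, and the dollar figures are the approximate amounts cited above.

```python
# Minimal sketch of the variance-significance rule described above. The
# names are ours; CMS's automated variance report is not reproduced in
# this report. dollar_floor is the additional dollar threshold a change
# must exceed (about $450 million for California in the quarter reviewed,
# i.e., 2 percent of quarterly medical expenditures; $0 for states that
# investigate any variance of plus or minus 10 percent).

def is_significant_variance(prior, current, pct_threshold=0.10, dollar_floor=0.0):
    """Return True if a quarter-over-quarter change should be investigated."""
    if prior == 0:
        return current != 0  # any spending appearing on a previously empty line
    change = current - prior
    return abs(change / prior) >= pct_threshold and abs(change) >= dollar_floor

# California's roughly 24 percent, roughly $200 million prescription drug
# increase fell below the $450 million floor and was deemed not significant:
print(is_significant_variance(830e6, 1_030e6, dollar_floor=450e6))  # False
# Under a plus-or-minus 10 percent rule with no dollar floor, it is flagged:
print(is_significant_variance(830e6, 1_030e6))                      # True
```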
Second, CMS reported cancelling the FMR requirement for regional offices in 17 out of 51 instances in the last 5 years when faced with resource constraints. In some cases, CMS excused individual regional offices from conducting planned FMRs due to staff shortages, as the agency did for regions 3 and 7 in 2014; region 8 in 2016; and regions 3, 7, 8, and 9 in 2018. In 2015, according to CMS officials, all 10 regions were excused from conducting an FMR, because the regional offices needed their staff to focus on implementing new procedures for validating expenditures for the Medicaid expansion population. In addition to cancelling FMRs, CMS was delayed in finalizing FMRs. Among the eight FMRs that were conducted in fiscal year 2014, three have been issued as final reports, CMS decided no report was needed on a fourth, and the four remaining FMRs from 2014 were still under review as of March 2018, delaying important feedback to states on their vulnerabilities. According to CMS officials, resource constraints have contributed to both of these weaknesses. Our analysis of staffing data indicated that, from fiscal years 2014 to 2018, the number of full-time equivalent staff dedicated to financial oversight activities declined by approximately 19 percent across all 10 regions. These staff are responsible not only for completing the quarterly reviews and FMRs, but also for other financial oversight activities, including resolving audit findings and other ongoing oversight activities noted previously. During this period, federal Medicaid expenditures are estimated to have increased by approximately 31 percent, and the reporting of expenditures has grown more complex. In addition to the decline in dedicated staff, officials told us they faced challenges in filling vacancies either because of hiring restrictions or challenges in recruiting qualified candidates. Officials described instances where regional offices shared resources with other offices to address critical gaps in resources. For example, region 9 was able to obtain part-time assistance from a region 6 reviewer to help review California's expenditures. However, CMS officials told us that they had not permanently reallocated resources between regional offices, because, as of May 2018, all regional offices were under-resourced given their various oversight responsibilities. With regard to cancelling FMRs, CMS officials noted that other oversight responsibilities, including the quarterly reviews, are required under statute or regulation and thus have a higher priority than FMRs. Compounding its resource allocation challenges, CMS has not conducted a comprehensive, national assessment of risk to determine whether resources for financial oversight activities are (1) adequate and (2) allocated—both across regional offices and oversight tools—to focus on the greatest areas of risk. Agency officials told us that they have not conducted a formal risk assessment, because they are assessing risk on an ongoing basis, allocating resources within each region accordingly and sharing resources across regions to the extent possible. However, this approach does not make clear whether the level of resources dedicated to financial oversight nationally is adequate given the risk. Federal internal control standards for risk assessment require agencies to identify and analyze risks related to achieving the defined objectives (i.e., assuring that state-reported expenditures are in accordance with Medicaid rules), and to respond to risks based on the significance of the risk.
Without completing a comprehensive, national assessment of risk and determining whether staff resources dedicated to financial oversight are adequate and allocated commensurate with risk, CMS is missing an opportunity to improve its ability to identify errors in reported expenditures that could result in hundreds of millions of dollars in potential savings to the Medicaid program. Vulnerabilities Exist in CMS's Review of Expenditures for Which States Receive Higher Federal Matching Rates CMS reviewers in the three regional offices we selected did not consistently perform variance analyses—which compare changes in expenditures from the quarter under review to the previous quarter—of higher matched expenditures during quarterly reviews. Further, the sampling procedures used to examine Medicaid expansion expenditures did not account for varying risks across states. CMS Did Not Consistently Conduct Variance Analyses When Reviewing Certain Types of Expenditures that Receive Higher Federal Matching Rates CMS has multiple procedures in place to review expenditures that receive a higher federal matching rate. As with other expenditures, reviewers are required to complete a variance analysis, comparing reported expenditures in the quarter under review to those reported in the prior quarter and investigating variances above a certain threshold. However, we found that our three selected regional offices were not consistently conducting these analyses across several different types of expenditures with higher matching rates. While CMS's internal guidance required that regional offices conduct variance analyses on expenditures with higher matching rates, we found that for the quarter we investigated (generally the first quarter of fiscal year 2017), our selected regional offices did not consistently do so for three types of expenditures that we reviewed: Indian Health Service (IHS) services, family planning services, and services for certain women with breast or cervical cancer. Two of the three regional offices (regions 3 and 9) did not conduct or did not document these required variance analyses, and the remaining regional office (region 6) conducted the analyses but deviated from standard procedures outlined in CMS guidance, as summarized below. CMS region 3. Reviewers did not conduct variance analyses for either Maryland or Pennsylvania. Regional office staff with whom we spoke said that as part of the quarterly review they conduct the standard variance analysis on category-of-service lines of the CMS-64. Expenditures for IHS, family planning, and services for certain women with breast or cervical cancer are not separately identified at that level. Although CMS reviewers said they thought the standard analysis was sufficient, net changes within a broad service category may obscure major changes within these higher matched expenditures. For example, examining changes in total inpatient hospital expenditures would not necessarily reveal a significant variance limited to inpatient expenditures in IHS facilities that receive a higher federal match. CMS region 9. Reviewers told us that they examined higher matched expenditures for California; however, no variance analyses of IHS, family planning, or breast or cervical cancer services were included in the work papers provided to us. In addition, they told us that they do not conduct a variance analysis on IHS, family planning, and services for certain women with breast or cervical cancer for Nevada, noting that expenditures in these areas tend to be quite small. CMS region 6.
Reviewers conducted a variance analysis of these higher matched expenditures for Arkansas and Texas and provided us documentation; however, the documentation showed some deviation from the required steps specified in CMS's guidance. For example, for Texas, spending on two of the three categories was beyond the threshold for significance, but the reviewer did not document any follow-up with the state. Although expenditures for IHS, family planning, and certain women with breast or cervical cancer constituted a small share of total federal spending on Medicaid services—roughly 1 percent—combined spending on these categories was approximately $1 billion in the first quarter of fiscal year 2017. Our analysis indicated that variances in spending for these three services ranged widely across our six states, and in four of the states, some of their expenditures were above the thresholds for significance. (See fig. 4.) For example, in regional office 3, Maryland experienced a significant variance in its family planning expenditures—an increase of approximately $8 million, or 7,700 percent, from the previous quarter—but there was no indication in the documentation provided that the regional office identified or investigated that variance. Similar to the variance analyses for other higher matched expenditure types, we found that the selected regional offices did not consistently conduct variance analyses on expenditures reported for the Medicaid expansion population. First, although five of our six states opted to expand Medicaid under PPACA, two of the five states (Maryland and Pennsylvania) were not subjected to a variance analysis for their expansion populations, a segment that accounted for nearly $7 billion in Medicaid expenditures in fiscal year 2016. Among the remaining three states, CMS regional office staff conducted a variance analysis, but in two of them, the reviewers did not document whether they investigated significant variances, leaving it unclear whether this required step was taken. Specifically, for two of the three remaining states—Arkansas and Nevada—reviewers did not document which variances were deemed significant or that any such variances were discussed with state officials. The guidance specified in CMS's quarterly review guide is not always clear or consistent. For example: For IHS, family planning, and certain women with breast or cervical cancer, the guidance is explicit that the analysis is required, but the automated variance report used by reviewers for the step does not include these expenditures. For Medicaid expansion expenditures, the review guide is not explicit about whether a variance analysis is required, but CMS has an automated variance report available for these expenditures, which suggests that such an analysis was expected. The guidance suggests that a variance analysis should be conducted for expansion enrollees; however, it does not specify whether the analysis should be conducted in conjunction with—or take the place of—more in-depth examinations. According to federal internal control standards for information and communication, agencies should communicate the information necessary for staff to achieve the agency's objectives. CMS's guidance on conducting variance analyses for types of expenditures with higher federal matching rates has not been sufficiently clear to assure that such analyses are being consistently conducted.
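The percentages in these variance comparisons are simple quarter-over-quarter changes, and the Maryland example can be checked by deriving the implied prior-quarter base. The sketch below uses the rounded figures from this report, so the result is approximate.

```python
# Quarter-over-quarter change arithmetic for the Maryland family planning
# example above. Figures are the rounded amounts cited in this report
# (an increase of about $8 million, or 7,700 percent), so the derived
# base is approximate.

increase = 8_000_000   # approximate dollar increase in the quarter
multiplier = 77.0      # 7,700 percent, expressed as a multiplier

prior_quarter = increase / multiplier
print(f"Implied prior-quarter spending: ${prior_quarter:,.0f}")  # ~$104,000

# Expressed the way a variance report would state it:
current_quarter = prior_quarter + increase
variance = (current_quarter - prior_quarter) / prior_quarter
print(f"Variance: {variance:,.0%}")  # 7,700%
```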
Because such checks are not consistently conducted, errors may go undetected and CMS may provide federal funds at a higher matching rate than is allowable. The Sampling Procedures Used to Examine Medicaid Expansion Expenditures Did Not Account for Varying Risks across States CMS has additional procedures in place to review service expenditures reported for the Medicaid expansion population, a category of expenditures that received a 95 percent federal match in 2017. Specifically, in addition to a variance analysis, CMS guidance specifies that each regional office reviewer is to review claims for a sample of expansion enrollees. The guide directs the reviewer to obtain a full list of all expansion enrollees from the state and to select 30 to 40 for further review. Next, the reviewer is to obtain supporting documentation from the state listing the eligibility factors for the sampled enrollees, such as age, pregnancy status, Medicare enrollment, and income. The reviewer is to select a single claim for each enrollee and verify that the corresponding expenditures were reported under the correct federal matching rate category—i.e., that the sample claim for each individual was accounted for in the relevant section of the CMS-64. The review guide specifies that the sample review be conducted each quarter unless the state has had four consecutive quarters with three or fewer errors, in which case the sampling must be performed only annually. We found that regional offices were identifying errors in their sampling reviews. For example, region 3 reviewers found that Pennsylvania had incorrectly categorized an individual in the sample as a Medicaid expansion enrollee, with the selected expenditures initially reported as eligible for the higher matching rate. According to CMS central office officials, the sampling methodology has helped identify systemic issues with state expenditure systems in some states and resulted in corrections, adjustments, and in one case, a disallowance. Under current procedures, among our five selected states that expanded Medicaid under PPACA, all five were determined to have had four consecutive clean quarters according to agency officials; that is, each state had three or fewer errors in each quarter. Nationally, all but one of the 33 states that have implemented Medicaid expansion under PPACA had four consecutive clean quarters as of March 2018, according to CMS officials. We found, however, that CMS's procedures for sampling reviews had a key weakness in that they did not account for varying risks across states, as illustrated in the following examples. First, the sample size does not account for significant differences in program size. For example, both California and Arkansas have expanded Medicaid under PPACA, and regional office staff told us they reviewed claims for 30 expansion enrollees in each of the two states, despite the fact that California has over 10 times as many expansion enrollees as Arkansas. Region 9 officials told us that for California they had initially sampled 100 enrollees during the first quarter they were required to conduct this analysis, but the review was time-consuming given staff resources, and they were advised by CMS's central office to limit their sample to 30 individuals. CMS officials told us that the sampling procedures are resource intensive and that the sample size they decided upon was what they thought they had the resources to complete. Additionally, the sample size does not account for previously identified risks in a state's program.
Specifically, as we noted in a 2015 report, CMS’s sampling review of expansion expenditures was not linked to or informed by reviews of eligibility determinations conducted by CMS, some of which identified high levels of eligibility determination errors. According to CMS officials, the expenditure review is primarily intended to ensure that states are correctly assigning expenditures for the expanded eligibility groups as initially determined, not whether the eligibility determination is correct. Federal standards for internal control related to risk assessment require that agencies identify, analyze, and respond to risks. However, because CMS’s sampling methodology does not account for risk factors like program size and high levels of eligibility determination errors, the agency’s review of expansion population expenditures may be missing opportunities to detect systemic issues with improperly matched expenditures. Quarterly variance analyses and sampling of Medicaid expansion enrollees can be supplemented by financial management reviews. For fiscal year 2016, CMS recommended regional offices conduct FMRs on expenditure claims for expansion enrollees. As of March 2018, however, regional offices had completed an FMR on Medicaid expansion expenditures in only one state, with no findings, and were in the process of completing FMRs for five other states. According to CMS officials, no additional reviews in this area were planned for fiscal year 2018. CMS Resolved over $5.1 Billion in Expenditure Errors in Fiscal Years 2014 through 2017 Financial Impact of Expenditure Reviews Compared with Program Integrity Recoveries The impact of CMS’s expenditure review activities is greater than the impact from other program integrity efforts. For example, in fiscal year 2015, CMS resolved errors through expenditure reviews that saved over $1.4 billion in federal funds. In the same year, CMS reported that efforts by states and the federal government to identify improper payments to providers—for example, services that were billed by a provider but were not received by a beneficiary—resulted in recoveries that totaled $852.9 million, in both state and federal funds. In fiscal years 2014 through 2017, CMS’s regional offices resolved expenditure errors that reduced federal spending by over $5.1 billion, with at least $1 billion in errors resolved in each of three of those four years. Errors were resolved through states agreeing to reduce their reported expenditures, which prevented federal payments to the state for those expenditures; and through CMS issuing disallowances, under which states are required to return federal funds. Although CMS resolved over $1 billion in expenditure errors in each year of fiscal years 2014 through 2016, CMS resolved less than $600 million in fiscal year 2017. CMS officials explained that this change likely reflects delays in clearance of disallowances due to the transition between presidential administrations. (See fig. 3.) In addition to these resolved errors, as of the end of 2017, CMS had $4.47 billion in outstanding deferrals of federal funds, where CMS was delaying federal funds until additional information was provided. Expenditures flagged for deferrals may or may not represent errors. All 10 CMS regional offices resolved errors from fiscal years 2014 through 2017, though the magnitude varied across regions. (See table 5.) Among the 10 regional offices, 9 reported that they had resolved errors through states agreeing to reduce reported expenditures. 
Additionally, 9 regional offices issued a total of 49 disallowances across 16 states, with the majority of the disallowances occurring in regional offices 2 and 3. Finally, all 10 regional offices had taken deferrals for questionable expenditures, with 22 states having outstanding "active" deferrals that had not been resolved as of the fourth quarter of fiscal year 2017; these deferrals ranged in amount from $178 to $444 million. CMS officials told us that the range of resolved errors and deferred funds across regional offices may reflect differences in the proportion of high-expenditure states. For example, regional office 4 oversees four states ranking in the top 20 in terms of Medicaid expenditures, while regional office 8 does not oversee any top-20 states. The variation may also reflect large actions taken in specific states. For example, the majority of the disallowed funds in regional office 2 from fiscal years 2014 to 2017 were due to a single disallowance of $1.26 billion in one state. The financial significance of individual errors resolved by CMS's regional offices varied significantly. We found that regional offices resolved errors that ranged from reporting errors that had no federal financial impact—such as expenditures that were allowable, but were reported on the incorrect line—to hundreds of millions of dollars in expenditures that were found to be unallowable under Medicaid requirements. Over the fiscal years we reviewed, more than half of the disallowances CMS issued were less than $15 million; however, in four states CMS issued disallowances of over $100 million, including a disallowance of over $1 billion in New York. (See fig. 5.) In some cases, actions taken by CMS to resolve errors were the culmination of years of work. For example, over several years the California Medicaid program reported a large volume of expenditures for which it did not yet have sufficient supporting documentation. Regional office officials told us that the state reported these expenditures in order to comply with the 2-year filing limit, and had reported these as "placeholder claims," with the intention of providing additional support at a later time. Over the course of at least 6 years, CMS deferred hundreds of millions of dollars in federal funds related to these placeholder claims. Of the active deferrals as of the end of fiscal year 2017, most of the total amount of deferred funds was taken for expenditures in California, which represented $3.4 billion of the $4.5 billion in total active deferrals. According to CMS officials, in 2015, CMS prohibited California from reporting additional placeholder claims. Region 9 officials told us that they continue to work with the state to clear the deferrals related to this issue. They were able to resolve 9 related deferrals in fiscal year 2017; however, more than 60 other related deferrals remained unresolved. Conclusions The growth of federal Medicaid expenditures, estimated at about $370 billion in fiscal year 2017, makes it critically important to assure expenditures are consistent with Medicaid requirements. CMS has a variety of processes in place to review state-reported expenditures, and those reviews have resulted in CMS resolving errors that have saved the federal government a considerable amount of money: over $5 billion in the last 4 years.
However, the increasing complexity of expenditure reporting is occurring as resources to review these expenditures are decreasing, hindering CMS's ability to target risk and potentially allowing hundreds of millions of federal dollars in errors to go undetected. Because CMS has not conducted a comprehensive risk assessment, it may be missing opportunities to better target resources to higher-risk expenditures and increase the savings from these oversight activities. The variety of matching rates has contributed to the increased complexity of CMS's expenditure reviews. Although CMS has review procedures in place to assure that the correct matching rate is applied for services and populations receiving a higher federal matching rate, unclear guidance has contributed to inconsistency in the extent to which these reviews are conducted. In addition, we found weaknesses in the sampling methodology CMS requires its regional offices to use to help ensure that expenditures for Medicaid expansion enrollees—expenditures that receive a higher matching rate and that represented almost 20 percent of total federal Medicaid spending in 2016—are consistent with Medicaid requirements. In particular, the methodology does not account for risk factors like program size or vulnerabilities in state eligibility-determination processes and systems. As a result of the inconsistency in reviews and a sampling methodology that does not consider program risk, errors may be going undetected, resulting in CMS providing federal funds at higher federal matching rates than is allowable. In addition, CMS could be missing opportunities to identify any systemic issues that may contribute to such errors. Recommendations We are making the following three recommendations to CMS: 1. The Administrator of CMS should complete a comprehensive, national risk assessment and take steps, as needed, to assure that resources to oversee expenditures reported by states are adequate and allocated based on areas of highest risk. (Recommendation 1) 2. The Administrator of CMS should clarify in internal guidance when a variance analysis on expenditures with higher match rates is required. (Recommendation 2) 3. The Administrator of CMS should revise the sampling methodology for reviewing expenditures for the Medicaid expansion population to better target reviews to areas of high risk. (Recommendation 3) Agency Comments We provided a draft of this report to HHS for review and comment. HHS concurred with all three recommendations, noting that it takes seriously its responsibilities to protect taxpayer funds by conducting thorough oversight of states' claims for federal Medicaid expenditures. Regarding our first recommendation—that CMS complete a comprehensive, national risk assessment and take steps to assure that resources are adequate and allocated based on risk—HHS noted that CMS will complete such an assessment, and, based on this review, will determine the appropriate allocation of resources based on expenditures, program risk, and historical financial issues. CMS will also identify opportunities to increase resources. Regarding our second recommendation—clarifying internal guidance on when a variance analysis on higher matched expenditures is required—HHS noted that CMS will issue such internal guidance.
Regarding our third recommendation—that CMS revise the sampling methodology for reviewing expenditures for the Medicaid expansion population to better target reviews to areas of high risk—HHS noted CMS is considering ways to revise its methodology. HHS's comments are reproduced in appendix II. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of Health and Human Services, appropriate congressional committees, and other interested parties. The report is also available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-7114 or yocomc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Appendix I: CMS Financial Management Review (FMR) Topics and Estimated Amounts at Risk, Fiscal Years 2014 through 2017 [Table not reproduced: the appendix table lists planned FMR topics and estimated amounts at risk by regional office. Topics included public psychiatric residential treatment facilities; Medicare Part B premium buy-ins; outpatient hospital reimbursement for mental health services; review of comprehensive psychiatric emergency program rates; provider taxes implemented to avoid program reductions; health homes data and expenditures reporting; provider incentive payments for health information technology; 1115 demonstration provider incentive payments; managed care organizations' provider payments; federally qualified health center reimbursement payments; eligibility and enrollment maintenance and operations; and managed care organizations' reporting of drug rebates. Per the table's notes, CMS cancelled certain 2014 FMRs due to a staffing shortage, and region 8 was excused from the requirement to conduct an FMR in 2016 due to staffing constraints.] Appendix II: Comments from the Department of Health and Human Services Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Carolyn L. Yocom, (202) 512-7114 or yocomc@gao.gov. Staff Acknowledgments In addition to the contact named above, Susan Barnidge (Assistant Director), Jasleen Modi (Analyst-in-Charge), Caroline Hale, Perry Parsons, and Sierra Gaffney made key contributions to this report. Also contributing were Giselle Hicks, Drew Long, and Jennifer Whitworth.
Why GAO Did This Study Medicaid has grown by over 50 percent over the last decade, with about $370 billion in federal spending in fiscal year 2017. CMS is responsible for assuring that expenditures—reported quarterly by states—are consistent with Medicaid requirements and matched with the correct amount of federal funds. CMS's review of reported expenditures has become increasingly complex due to variation in states' Medicaid programs and an increasing number of different matching rates. GAO was asked to examine CMS's oversight of state-reported Medicaid expenditures. In this report, GAO examined how CMS assures that (1) expenditures are supported and consistent with requirements; and (2) the correct federal matching rates were applied to expenditures subject to a higher match. GAO also examined the financial impact of resolved errors. GAO reviewed documentation for the most recently completed quarterly reviews by three of CMS's 10 regional offices for six states that varied by Medicaid program expenditures and design. GAO also reviewed policies, procedures, and data on resolved errors; and interviewed CMS and state officials. GAO assessed CMS's oversight processes against federal standards for internal control. What GAO Found The Centers for Medicare & Medicaid Services (CMS), which oversees Medicaid, has various review processes in place to assure that expenditures reported by states are supported and consistent with Medicaid requirements. The agency also has processes to verify that the correct federal matching rates were applied to expenditures receiving a higher-than-standard federal matching rate, which can apply to certain types of services and populations. These processes collectively have had a considerable federal financial benefit, with CMS resolving errors that reduced federal spending by over $5.1 billion in fiscal years 2014 through 2017. However, GAO identified weaknesses in how CMS targets its resources to address risks when reviewing whether expenditures are supported and consistent with requirements. CMS devotes similar levels of staff resources to review expenditures despite differing levels of risk across states. For example, the number of staff reviewing California's expenditures—which represent 15 percent of federal Medicaid spending—is similar to the number reviewing Arkansas' expenditures, which represents 1 percent of federal Medicaid spending. CMS cancelled in-depth financial management reviews in 17 out of 51 instances over the last 5 years. These reviews target expenditures considered by CMS to be at risk of not meeting program requirements. CMS told GAO that resource constraints contributed to both weaknesses. However, the agency has not completed a comprehensive assessment of risk to (1) determine whether oversight resources are adequate and (2) focus on the most significant areas of risk. Absent such an assessment, CMS is missing an opportunity to identify errors in reported expenditures that could result in substantial savings to the Medicaid program. GAO also found limitations in CMS's processes for reviewing expenditures that receive a higher federal matching rate. Internal guidance for examining variances in these expenditures was unclear, and not all reviewers in the three CMS regional offices GAO reviewed were investigating significant variances in quarter-to-quarter expenditures.
Review procedures for expenditures for individuals newly eligible for Medicaid under the Patient Protection and Affordable Care Act were not tailored to different risk levels among states. For example, in its reviews of a sample of claims for this population, CMS reviewed claims for the same number of enrollees—30—in California as for Arkansas, even though California had 10 times the number of newly eligible enrollees as Arkansas. Without clear internal guidance and better targeting of risks in its review procedures for expenditures receiving higher matching rates, CMS may overpay states. What GAO Recommends GAO is making three recommendations, including that CMS improve its risk-based targeting of oversight efforts and resources, and clarify related internal guidance. The Department of Health and Human Services concurred with these recommendations.
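The sample-review frequency rule described in this report (quarterly review of claims for 30 to 40 expansion enrollees, relaxing to annual once a state has four consecutive quarters with three or fewer errors) reduces to a streak check. The following minimal sketch is ours; it also assumes the review stays annual once the streak is reached, a detail the report does not address.

```python
# Minimal sketch of the sample-review frequency rule described in this
# report: quarterly sampling of 30 to 40 expansion enrollees, dropping to
# annual after four consecutive "clean" quarters (three or fewer errors
# each). Names are ours; behavior after a streak is reached is assumed.

CLEAN_ERROR_MAX = 3        # errors per quarter that still count as clean
CLEAN_STREAK_REQUIRED = 4  # consecutive clean quarters before annual review

def review_frequency(quarterly_error_counts):
    """Return 'annual' once any four consecutive quarters are clean."""
    streak = 0
    for errors in quarterly_error_counts:
        streak = streak + 1 if errors <= CLEAN_ERROR_MAX else 0
        if streak >= CLEAN_STREAK_REQUIRED:
            return "annual"
    return "quarterly"

print(review_frequency([2, 0, 1, 3]))  # annual
print(review_frequency([2, 5, 1, 3]))  # quarterly (the 5-error quarter resets the streak)
```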
Background FTZ Benefits Example of Foreign-Trade Zones (FTZ) Benefits The FTZ Board might authorize an automobile manufacturer that imports foreign-source components, such as engines and transmissions, into an FTZ to pay the customs duty rate on the value of the finished vehicles (2.5 percent) instead of the sum of the duties owed for certain imported components. Duty rates for those components generally range from 0 percent to approximately 10 percent. As a result, the company would pay lower customs duties to manufacture automobiles in an FTZ than it would pay outside the FTZ. To encourage companies to maintain and expand their operations in the United States, the FTZ program offers a range of benefits, including the possible reduction or elimination of duties on certain imported goods. For example, a company operating in an FTZ that manufactures products using foreign materials or components can pay lower overall duties by electing to pay the duty rate for the finished product rather than for the product's imported foreign component parts, which may have a higher duty rate (see sidebar). This benefit provides an incentive to companies to manufacture in the United States rather than move their manufacturing operations overseas to avoid paying U.S. duties. We reported in July 2017 that, while FTZs were created to provide benefits to the American public, little is known about their overall economic impact. Few economic studies have focused on FTZs, and those studies have not quantified economic impacts or examined the effect of companies' FTZ status on regional and overall economic activity such as employment. As of June 2018, there were 262 approved FTZs in the United States, with at least one in each state and in Puerto Rico, according to Board staff. Most FTZs consist of multiple physical locations, known as sites or subzones, which include individual companies' plants as well as multi-user facilities such as seaports or airports. FTZ Board and CBP Responsibilities According to Board staff, the Board's responsibilities include, among others, approving the establishment of FTZs and reviewing notifications and applications for production authority. The Board must authorize any proposed production activity before a company can bring into an FTZ the specified foreign-source materials or components for incorporation into a final product and to potentially receive FTZ benefits. Current Board staff are Commerce employees and comprise an Executive Secretary, eight staff analysts who gather and analyze information for the Board's consideration, and a coordinator who handles clerical tasks, according to Board staff. CBP is responsible for oversight and supervision of FTZ operators, including the collection of duties, taxes, and fees. CBP reviews production notifications and applications with respect to its ability to provide oversight and ensure program compliance, and it informs the Board of its ability to oversee a proposed production activity if it were to be authorized. Production Notification and Application Processes Federal regulations set forth processes and procedural rules for companies applying for, and operating in, FTZs as well as for the Board's evaluation of notifications and applications for production authority, pursuant to the FTZ Act of 1934, as amended. According to Board staff, the Board issued updated and modified regulations for FTZs in February 2012 to simplify the application process and expedite the review of applications when possible.
The Board staff stated that they took into consideration comments from industry, including companies whose production activities require authorization decisions within short time frames, when updating the regulations. The 2012 regulations divided the production application process into two processes, notification and application, to create a less resource-intensive option for companies and the U.S. government, according to Board staff. Board staff said that the 2012 regulations allow the Board to approve notifications and applications with restrictions. For example, the Board may decide to, among other things, (1) authorize the exemption of duty payments on some, but not all, components named in the notification for the proposed production activity; (2) authorize the activity for a limited time period; or (3) authorize the activity for a specified quantity of the component to be brought into the FTZ. The following describes the notification and application processes under the 2012 regulations. Notification process. A company must first submit a production notification—which requires less information from companies than a production application—requesting production authority in an FTZ. If the Board approves a company’s notification, the company can begin the production activity. For example, in a 2013 notification, a company requested authority to produce printing plates used in the newspaper industry and to pay duties at the duty rate applicable to the final product (i.e., printing plates) instead of the duty rates applicable to the five individual foreign-source components (e.g., aluminum coils). The Board approved the notification without restrictions, allowing the company to begin conducting the authorized activity. If a notification is approved with restrictions, the company may begin the production activity while adhering to the specified restrictions. For example, in another 2013 notification, a company requested authority to produce sports safety helmets, bicycle baby seats, and bicycle car-carrier racks and to pay duties on the final products instead of paying individual duties on some foreign-source components (e.g., helmet and baby seat parts). The Board approved the notification with a restriction, authorizing the company to begin the production activity but requiring it to pay duty on one foreign-source component (textile bags). Application process. According to Board staff, if a notification is approved with restrictions or denied, the company may file a more detailed production application to continue seeking authority for the activity that was restricted or denied. If the Board does not unanimously decide to authorize the application with or without restrictions, the production authority is denied. For example, in 2012, the Board determined that a notification requesting that a company’s existing authority to produce plastic adhesive bandages in an FTZ be expanded to include production of fabric adhesive bandages using foreign-source textile components warranted further review and denied the notification. The company subsequently filed an application for the expanded authority, providing additional information to support its request, which the Board also denied. A company whose application is denied may appeal the Board’s decision to the U.S. Court of International Trade. According to Board staff, the production application process is similar to the application process under the pre-2012 regulations.
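The two-step process described above reduces, at its core, to a small set of outcomes and one decision rule. The sketch below is a simplified model of that flow; the outcome labels and the unanimity rule for applications follow this report's description, while the function names and data shapes are hypothetical.

```python
# Simplified model of the two-step production authority process described
# above. Outcome labels and the unanimity rule follow the report's
# description; names and data shapes are illustrative assumptions.

def notification_outcome(decision: str) -> str:
    """A notification is approved, approved with restrictions, or not approved."""
    assert decision in {"approved", "approved_with_restrictions", "not_approved"}
    if decision == "approved":
        return "company may begin the production activity"
    if decision == "approved_with_restrictions":
        return "company may begin the activity while adhering to the restrictions"
    # If not approved, the company may file a more detailed application.
    return "company may file a production application to continue its request"

def application_authorized(board_votes: list[bool]) -> bool:
    """An application is authorized only if the Board's decision is unanimous."""
    return all(board_votes)

print(notification_outcome("approved_with_restrictions"))
print(application_authorized([True, True]))   # unanimous -> authorized
print(application_authorized([True, False]))  # not unanimous -> denied
```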
Figures 1 and 2 provide an overview of the Board processes for considering notifications and applications for production authority. Criteria Relevant to Evaluation of Production Notifications and Applications The 2012 regulations detail criteria for the Board to consider when reviewing notifications and applications. These criteria include threshold and economic factors as well as consideration of significant public benefits (see table 1). According to the regulations, if the Board determines that any of the threshold factors apply to a proposed or ongoing production activity, it shall deny or restrict authority for the activity. After reviewing the threshold factors, if there is a basis for further consideration of the application, the Board shall consider economic factors listed in the regulation when determining the net economic effect of the proposed activity. The regulations’ requirements for the Board to consider these criteria when reviewing notifications and applications differ as follows: Notifications. Section 400.37 of the regulations states that the Executive Secretary’s recommendation shall consider, among other things, comments submitted in response to the notification in the context of the factors set forth in section 400.27. The regulation does not state that the Executive Secretary’s recommendation must consider each factor individually. Applications. Section 400.27 states that the Board shall apply the criteria set forth therein. According to section 400.27, the Board must first review the threshold factors and after its review, if there is a basis for further consideration of the application, must consider all of the listed economic factors when determining the net economic effect of the proposed activity. Additionally, the Board is to take the threshold factors and economic factors into account in considering the significant public benefit(s) that would result from the production activity. Board staff observed that the notification process is designed for identifying concerns related to the proposed production authority, not for resolving such concerns. If the Board identifies any concerns that it deems significant enough to deny a notification, the application process allows the Board to collect more information to inform further analysis. Board staff stated that examples of concerns related to production notifications and applications might include objections from domestic producers of component materials, such as textiles, who believe they would be negatively affected by duty reduction on foreign-source components used in the proposed production activity. According to the Board, of the 293 production notifications submitted from April 2012 through September 2017 for which it rendered decisions, 218 notifications were approved without restrictions, 62 were approved with restrictions, and 13 were not approved (see fig. 3). For further information about the Board’s decisions for the 293 notifications by industry category, see appendix II. Of the companies that submitted the 75 production notifications approved with restrictions or not approved from April 2012 through September 2017, nine companies subsequently submitted production applications. As of September 2017, the Board had authorized two of these applications with restrictions and had not authorized one application, according to Board staff.
For the remaining six applications, the Board had not authorized one application and the Board’s decisions were pending for the other five applications as of August 2018. FTZ Board Followed Procedures Generally Aligned with Regulations in Evaluating Production Notifications We Reviewed The Board’s Procedures for Evaluating Notifications Generally Align with Regulations Our review of Board documents and interviews with Board staff found that the Board has established procedures for the evaluation of notifications that generally align with the Board’s regulations. The Board’s procedures for evaluating notifications can be organized into three phases: (1) information collection, (2) analysis and recommendation, and (3) authorization decision (see fig. 4). Each phase includes steps specifying the responsible party and the intended product and result. In general alignment with the regulations, the Board’s procedures for evaluating production notifications include steps for collecting information from the notifications, from public comments submitted in response to Federal Register notices of the notifications, from reviews of the notifications by industry specialists at Commerce and other agencies, and from CBP regarding its ability to oversee the proposed production activity. Notification information. The regulations specify that notifications must (1) provide the identity and location of the FTZ user; (2) identify the materials, components, and finished products associated with the proposed activity; and (3) include information as to whether any material or component is subject to a trade-related measure or proceeding, such as orders for antidumping duties. The Board procedures require staff to determine whether a notification is complete before beginning to evaluate it. To help companies complete the application, Board staff provide an instruction sheet listing the information required by the regulations. Federal Register comments. The Board regulations require the Executive Secretary to invite public comments in response to a Federal Register notice, unless the Executive Secretary determines, based on the notification’s content, to recommend further review without inviting public comment. The Board procedures instruct staff to publish a notice in the Federal Register after determining that the notification is complete. Agencies’ reviews. The Board regulations do not require that industry specialists review notifications. The Board procedures instruct staff to request industry specialists at Commerce and, as appropriate, at other agencies to review the notifications. CBP comments. The Board regulations do not require Board staff to request CBP comments for notifications. The Board procedures instruct staff to prepare a letter to the CBP Port Director. According to CBP officials and guidance, CBP provides comments regarding its ability to oversee the proposed production activity to help ensure FTZ program rules and regulations are followed if it is approved. Phase 2: Analysis and Recommendation In general alignment with the regulations, the Board’s procedures for evaluating production notifications include steps to guide staff in considering the information collected and in preparing a recommendation to the Board regarding whether to approve the notification. Review of comments and other relevant factors. 
The Board regulations require that the Executive Secretary’s recommendation to the Board consider any comments submitted in response to the Federal Register notice; guidance from specialists within the government; and other relevant factors based on Board staff’s assessment of the notification in the context of the criteria, including threshold and economic factors listed in section 400.27. The Board procedures require staff evaluating notifications to consider any public comments submitted in response to the Federal Register notice and comments from industry specialists and CBP. Recommendations and memos. The Board regulations do not require Board staff to prepare recommendations or memos. The Board procedures require staff to use a prescribed format to prepare a recommendation, based on the information collected, regarding whether a notification should be approved (with or without restrictions) or not approved because further review of the proposed production activity is warranted. The staff also must prepare memos for the Treasury and Commerce Board members. The staff are to provide the memos with the recommendation to the Executive Secretary for review before sending them to the Board members. Phase 3: Authorization Decision In general alignment with the regulations, the Board’s procedures for evaluating production notifications include steps for the Executive Secretary to make a recommendation to the Board for its consideration and for Board staff to notify the applicant of the Board’s decision and to ensure that evaluation of the notification is completed within specified time frames. Executive Secretary’s recommendation and Board’s decision. The Board regulations specify that the Executive Secretary is required to submit a recommendation to the Board regarding whether further review of all or part of the proposed production activity is warranted. The Board procedures require the Executive Secretary to review the memos and recommendations prepared by the Board staff and submit them to the Board members for their review and concurrence with the recommendation. Notice to applicant. The Board regulations require the Executive Secretary to inform the applicant of the Board’s decision regarding authorization of the notification. Similarly, the Board procedures require Board staff to notify the applicant of the Board’s decision. Evaluation time frames. The Board regulations and procedures specify time frames for notification evaluation. For example, under the regulations, the Executive Secretary shall submit to the Board a recommendation on whether further review of all or part of the activity subject to the notification is warranted within 80 days of receipt of the notification. Similarly, the procedures state that Board staff will ensure that the recommendation is finalized so that the recommendation and memos can be sent to the Board members within 80 days of receipt of the notification. In addition, the regulations and procedures require that the applicant be informed of the Board’s decision about the notification within 120 days.
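The 80-day and 120-day notification time frames described above amount to a simple deadline schedule. The sketch below computes both deadlines from a hypothetical receipt date; the receipt date and function name are illustrative assumptions.

```python
# Sketch of the notification time frames described above: the Executive
# Secretary's recommendation is due within 80 days of receipt, and the
# applicant must be informed of the Board's decision within 120 days.
# The receipt date below is a hypothetical example.
from datetime import date, timedelta

RECOMMENDATION_DAYS = 80
DECISION_NOTICE_DAYS = 120

def notification_deadlines(received: date) -> dict[str, date]:
    return {
        "recommendation_due": received + timedelta(days=RECOMMENDATION_DAYS),
        "decision_notice_due": received + timedelta(days=DECISION_NOTICE_DAYS),
    }

for milestone, due in notification_deadlines(date(2017, 3, 1)).items():
    print(f"{milestone}: {due.isoformat()}")
```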
FTZ Board Followed Its Procedures in Evaluating Production Notifications We Reviewed Phase 1: Information Collection Our analysis of Board case records for 59 notifications and our interviews with Board staff and Commerce, Treasury, and CBP officials showed that when evaluating the notifications, the Board followed its procedures in collecting the required information from the applicants; inviting public comments in response to Federal Register notices; requesting reviews from specialists at other agencies and Commerce; and, for most notifications, requesting CBP comments. The Board collected the required information from applicants for the 59 notifications we reviewed. All of the notifications included (1) the identity and location of the FTZ user; (2) the materials, components, and finished products associated with the proposed activity; and (3) information on whether any material or component was subject to a trade-related measure or proceeding. For 5 of the 59 notifications we reviewed, Board staff recommended further review of the proposed activity on the basis of the applicant information and staff knowledge of the industry, according to Board staff. The staff explained that if the Board is aware of issues that would require a more detailed review of the proposed activity, the Board can decide, without collecting additional information, not to approve the notification. In such cases, the company must file a more detailed application if it wants to proceed with its request for production authority. For example, for 2 of these 5 notifications, Board staff recommended further review without collecting additional information because they were already reviewing production applications requesting similar production authorities for carbon fiber. For another notification, staff recommended further review without collecting additional information because the Board had not previously reviewed a similar request and the staff needed the additional information that would be collected through the application evaluation process. Of the five companies that submitted these 5 notifications, three companies decided to submit applications for production authority. For the remaining 54 notifications, Board staff published notices in the Federal Register and received public comments on 5 of them. The comments included both opposition and support from domestic producers and associations. For example, in comments responding to one of the notifications, a company opposed authorization of the proposed activity because the company believed that the activity, if approved, would likely have a negative impact on the domestic silicon metal industry. According to the comments, the price of silicon metal had declined significantly and granting the requested production authority would result in further downward pressure on U.S. silicon metal prices. In comments responding to another notification, a company supported the proposed extension of FTZ authority to produce upholstered furniture and related parts. The comments stated that the activity would, among other things, encourage production in a related industry, domestic thread production. Board staff sought and received reviews of the 54 notifications from industry specialists in six Commerce offices, including the Offices of Textiles and Apparel, Consumer Goods, Materials, and Energy and Environmental Industries. 
The specialists recommended approving 49 of the notifications (with or without restrictions) and not approving the remaining 5 notifications because further review was warranted. For example, for one notification, an industry specialist’s review recommended approval, noting that the competitive landscape in Puerto Rico—the FTZ’s location—had changed and some industry sectors had shifted manufacturing to foreign locations. According to the review, approval of the notification would therefore contribute to maintaining manufacturing operations in Puerto Rico, which would provide employment and an economic boost to the national economy. For a second notification, an industry specialist’s review recommended denying the requested production authority because of concerns about the possible effect of importing a textile component that was being produced domestically. The review stated that if the notification were approved, the company would avoid paying duties on the textile component, resulting in a significant incentive for the use of imported products over those produced domestically. For a third notification, which sought production authority for the demilitarization (or disassembly) of munitions and other explosive components, the Board staff requested and received comments from the Department of Justice regarding a firearm import regulation. According to the industry specialists who had reviewed notifications in our sample, their analyses were based on their knowledge of the industry, including domestic manufacturers of components that applicants sought to import into an FTZ, and on public comments submitted to the Federal Register, among other things. For 6 of the 59 notifications, Board staff did not ask CBP about its ability to oversee a proposed production activity because the staff were recommending further review of the notification. For the remaining 53 notifications, we found that the Board requested comments from CBP regarding its ability to provide oversight. Phase 2: Analysis and Recommendation We found that Board staff followed the Board’s procedures in reviewing comments and other relevant factors for all notifications in our sample and providing recommendations to the Board regarding authorization of the notifications. Review of Comments and Other Relevant Factors Our review of Board case records found that Board staff prepared evaluations for all 59 of the notifications we reviewed, documenting consideration of public comments, any agency specialists’ reviews, and CBP comments. In addition, although the regulations do not explicitly require consideration of the criteria listed in the regulations when evaluating notifications, Board staff informed us that they always considered economic and threshold factors when they had collected information that identified potential areas of concern. Our review of the case records for the 59 notifications found that some of the factors Board staff considered included whether similar production authority had been granted in the past for another company and whether concerns had been raised by domestic industries. For example, for one notification requesting production authority for wind turbine components, the Board staff’s evaluation noted that the Board had previously approved production authority involving wind turbines and related components for other companies.
For another notification, requesting production authority to import a foreign-source textile fabric for adhesive bandages duty free, the Board staff’s evaluation noted that similar requests claiming lack of availability of domestically produced textile fabric at competitive prices had been strongly disputed by domestic producers, trade associations, or both. More than half of the Board staff evaluations of the notifications we reviewed included a discussion of economic factors, and nearly a third included discussion of threshold factors. For example, 15 evaluations discussed the proposed activity’s potential impact on related domestic industries. The evaluation of a notification requesting authority to produce customized plastic containers stated that a domestic company producing reusable plastic containers opposed the request on the grounds that the proposed activity could harm that company in the U.S. market. In addition, 13 evaluations discussed exporting and re-exporting finished products. For example, an evaluation of a notification requesting authority to produce automotive textile upholstery material noted that the company did not intend to enter the finished product into the U.S. market for domestic consumption (i.e., the company would re-export the finished product for sale outside the U.S. market). Our review of case records for the 59 notifications found that the Board staff prepared recommendations for each notification and also prepared memos to the Treasury and Commerce Board members for the Executive Secretary’s review before providing them to the Board members. Reasons noted in recommendations to authorize a production activity without restrictions included prior authorization of a similar activity or lack of impact on domestic industry. Reasons for recommending denial of authorization included new or complex policy issues that required further review. Recommendations to authorize an activity with restrictions included restrictions on the quantity of a component that could be imported duty-free into an FTZ, on the amount of time for which a production activity would be authorized (e.g., 5 years), and on the eligibility of some components for FTZ benefits. For example, for one notification requesting authority to produce upholstered furniture, the memo recommended, among other things, restricting the amount of a specific foreign-source fabric that could be imported duty free into an FTZ and requiring that all other foreign-source fabrics be admitted to an FTZ under duty-paid status. We found that for all 59 notifications, the Board staff’s recommendations were in agreement with the industry specialists’ comments. Phase 3: Authorization Decision Our review of the 59 sample notifications found that for each notification, the Board’s Executive Secretary followed the Board’s procedures in submitting a memo to the Board with recommendations for its decision and notifying the applicants of the decision. In addition, the Board staff generally followed time frames listed in the procedures. Executive Secretary’s Recommendation and Board’s Decision The Board’s Executive Secretary submitted a memo to the Board recommending approving, approving with restrictions, or not approving each of the 59 notifications we reviewed. The Executive Secretary recommended approving 34 notifications, approving 15 notifications with restrictions, and denying 10 notifications (see fig. 5). 
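As a quick consistency check on the recommendation tallies reported above, the sketch below verifies that the three categories account for the full 59-notification sample and derives each category's share; the percentages are computed here rather than taken from the report.

```python
# Consistency check on the Executive Secretary's recommendation tallies
# reported above (34 approve, 15 approve with restrictions, 10 deny).
recommendations = {"approve": 34, "approve_with_restrictions": 15, "deny": 10}
sample_size = 59

assert sum(recommendations.values()) == sample_size
for outcome, count in recommendations.items():
    print(f"{outcome}: {count} ({count / sample_size:.0%})")
```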
We found that the Executive Secretary’s recommendations concurred with the Board staff’s recommendations for all 59 notifications and that the Commerce and Treasury Board members concurred with the FTZ Executive Secretary’s recommendations for 56 of the 59 notifications. For the remaining 3 notifications, the Executive Secretary recommended that further reviews were warranted and the Commerce Board member concurred. Because these notifications were not approved, the Executive Secretary did not contact the Treasury Board member for concurrence. According to Board staff, a notification will not be approved if at least one Board member determines further review is needed. See appendix III for more information about the Board’s decisions for the 59 notifications in our sample. For all 59 notifications, Board staff informed the applicant of the Board’s decision. For the majority of the notifications in our sample, the Board generally followed time frames listed in the procedures. For example, for 46 of the 59 notifications, the Board informed the applicant of its decision within 120 days after the notification’s submission, as required by the regulations and procedures. The other 13 cases were completed within 122 to 160 days. According to Board officials, processing some notifications took more time because of a government shutdown or internal procedural delays. (See app. IV for more information about the processing times for notifications in our sample.) The Board staff also noted that even when a case was delayed, processing the notification took less time than if the company had submitted an application under the production application process before the regulations were revised in 2012. According to Board staff, the notification process is designed to ensure that the applicant receives an authorization decision within 120 days. Board staff stated that, in general, any issues arising during evaluation of a production notification will lead to an authorization with restrictions or denial of the notification, since decisions on the merits of such issues would require extended comment and rebuttal periods and additional analysis that could not be completed within the 120-day time frame for notifications. Board staff stated that, in these cases, a company can choose to submit a more detailed application, triggering the Board’s application evaluation process. Among the companies that filed the 59 production notifications we reviewed, three companies whose notifications were not approved had filed a more detailed application for production authority as of September 2017. FTZ Board Followed Procedures Generally Aligned with Regulations in Evaluating Applications We Reviewed, but It Did Not Consistently Document Consideration of All Required Criteria Board’s Procedures Generally Align with Regulations for Evaluating Production Applications Our review of Board documents and interviews with Board staff showed that the Board has established procedures for evaluating production applications that generally align with its regulations. The Board’s procedures for evaluating production applications can be organized into the same three phases as those for evaluating production notifications—(1) information collection, (2) analysis and recommendation, and (3) authorization decision—although some of the requirements differ (see fig. 6 for an illustration of the Board’s application process). For each phase, the procedures include steps that specify the responsible party and the intended product and result.
In general alignment with the regulations, the Board’s procedures for evaluating production applications include steps for collecting information from the applications, from public comments submitted in response to Federal Register notices of the applications, from reviews of the applications by industry specialists at Commerce and other agencies, and from CBP. Application information. The Board regulations require the applicant to provide detailed information about the proposed production activities, such as (1) a summary of the reasons for the application, including a description of the finished products and imported components; (2) the estimated annual value of benefits to the applicant; and (3) an explanation of the requested production authority’s anticipated economic effects. To guide companies in completing applications, the Board provides an application instruction sheet with numerous questions, many of which are similar to requirements listed in the regulations. The Board’s procedures require Board staff to determine whether the application is complete before beginning to evaluate it. Federal Register comments. The Board regulations require that, after Board staff determine that the application satisfies regulatory requirements, the Executive Secretary shall, among other things, publish a notice in the Federal Register inviting public comments. Similarly, the Board procedures require the preparation of a notice for the Executive Secretary’s review and signature that will be transmitted to the Federal Register. Agencies’ review. While the Board’s regulations do not specifically require Board staff to ask industry specialists to review the production applications, the procedures instruct staff to consult with industry specialists at Commerce and other agencies as appropriate. See the text box for a description of production application reviews by industry specialists in Commerce’s Office of Textiles and Apparel (OTEXA). Description of Production Application Review by Department of Commerce Industry Specialists According to industry specialists at the Department of Commerce, when Foreign-Trade Zones (FTZ) Board staff receive an application pertaining to textile products, they forward the application to the department’s Office of Textiles and Apparel (OTEXA). OTEXA officials then issue a mass mailing alerting industry (i.e., nongovernment) representatives that a textile case was submitted. In addition, the industry specialists said that the department co-manages the Industry Trade Advisory Committee on Textiles and Clothing, consisting of 23 vetted advisory committee members representing domestic producers, importers, retailers, distributors, and associations, among others. The specialists stated that OTEXA officials would notify this committee about the Federal Register notice for the textile application to help ensure that the industries have seen the notice. According to the industry specialists, OTEXA will thoroughly review the case, taking into account public comments, and submit a memo with a recommendation to the FTZ Board staff for consideration. The specialists stated that the main purpose of OTEXA’s review is to determine whether the applicant is seeking to bring into an FTZ a textile component that is being manufactured domestically. According to the specialists, if OTEXA determines that the component is manufactured domestically, it will recommend to the FTZ Board staff that the application should not be authorized.
The industry specialists said that lack of opposition to the application usually indicates that there is no domestic manufacturer of the product. CBP’s review. The regulations require the Executive Secretary to provide the application and Federal Register notice to CBP for review and require CBP to submit any comments about the application to the Executive Secretary by the conclusion of the Federal Register public comment period. Similarly, the Board procedures require Board staff to prepare a letter to the CBP Port Director. According to the Board staff and CBP officials, a letter is sent to the local CBP Port Director to collect information on CBP’s ability to provide oversight and help ensure that FTZ program rules and regulations are followed if the activity is authorized. The Board Followed Its Procedures in Evaluating Production Applications We Reviewed Phase 1: Information Collection Our review of available documents for each of the three applications in our sample indicates that Board staff followed the Board’s procedures in collecting information from companies, publishing notices and obtaining public comments from the Federal Register, and gathering comments from agencies such as Commerce and CBP. All three companies requested authority to import textiles from foreign suppliers into an FTZ for use in manufacturing products that would later be imported from the FTZ into the U.S. market for consumption. The Board staff collected information from all three companies’ applications. For example, each company provided information regarding (1) reasons for the application and an explanation of its anticipated economic benefits; (2) the estimated total annual value of benefits of the proposed activity to the company; (3) whether the activity was consistent or inconsistent with U.S. trade and tariff law or policy formally adopted by the executive branch; (4) whether approval of the activity under review would seriously prejudice U.S. tariff and trade negotiations or other initiatives; and (5) whether the activity involved items subject to quantitative import controls or inverted tariffs. We found that two of the companies responded partially to a question soliciting data on annual current and planned production capacity for the proposed FTZ activity. In addition, one of these companies did not respond to a question regarding whether the production activity would result in significant public benefits, taking into account the threshold and economic factors. According to Board staff, applicants may not be able to provide the quantitative information needed to answer some of the questions. The staff stated that, because the evaluation process does not lend itself to specific calculations, the absence of certain data does not prevent the Board’s evaluation of the application. According to Board staff, the Board’s recommendations are based on the totality of qualitative and quantitative information in the case record. The Executive Secretary posted notices in the Federal Register of the three production applications, pursuant to the Board’s procedures, and received public comments on all three. One application received two comments from a domestic textile producer that opposed the application. Another application received three comments—two from a domestic textile producer and one from domestic textile industry trade associations—opposing the application and received a fourth comment—from a domestic textile producer—supporting it.
The third application received 14 comments from domestic textile producers, textile organizations, and congressional and city government officials, among others. Twelve of the 14 comments supported the application; the remaining 2 comments, from the same domestic producer, opposed it. Board staff requested that industry specialists review one of the three production applications, although the Board’s procedures do not require such reviews, according to Board staff. In a memo from Commerce’s OTEXA, a specialist who reviewed the application recommended not approving it because the textile components that the company had planned to import into the FTZ were also produced domestically by other manufacturers. In addition, the memo stated that granting the company’s request for FTZ production authority would provide a significant incentive to use imported textile materials rather than textile materials produced domestically, which could have negative economic effects on domestic producers and companies supplying the production components. For the other two applications—both related to the production of carbon and other fiber with foreign-source components—the Board staff did not seek comments from industry specialists and initiated their own industry research instead. According to Board staff, they did not reach out to OTEXA because OTEXA had recently provided comments on a similar carbon fiber case. The Board staff did not request that other agencies review the three applications. CBP’s local Port Director reviewed all three production applications and responded that it could provide oversight of the proposed activities. Phase 2: Analysis and Recommendation We found that Board staff followed the Board’s procedures in reviewing comments and other relevant factors for the three production applications and providing recommendations to the Board regarding approval of the applications. Review of Comments and Other Relevant Factors Our review of Board case records for the three applications found that in evaluating the applications, Board staff considered the public comments submitted in response to the Federal Register notices as well as comments from industry specialists and CBP. In addition, although the case records did not document consideration of all required criteria for two of the three applications, we concluded after interviewing Board staff that they had considered the required criteria. The procedures do not require Board staff to document consideration of the required criteria. The case records we reviewed also showed that Board staff considered the authorization decisions of recent applications involving similar foreign-source components. Examiner’s Reports and Recommendations We found that the Board staff issued preliminary recommendations and subsequently prepared detailed examiner’s reports, with final recommendations, for the three production applications. For two of the applications, the examiner preliminarily recommended authorizing the requested production activity with a restriction, namely, requiring that the final product be re-exported and not sold on the U.S. market. For the third application, the examiner preliminarily recommended, on the basis of the OTEXA specialist’s analysis, not approving the request for expanded FTZ production authority. The Board staff also prepared reports with final recommendations for the Executive Secretary’s review, taking into account new evidence and rebuttals that the applicants had submitted in response to opposing public comments.
The final recommendations proposed by the Board staff were identical to the preliminary recommendations. For the two applications that received final recommendations to authorize with restrictions, the examiner’s reports stated that an authorization without restrictions would negatively impact a domestic producer and that the applicants had not demonstrated a causal link between proposed FTZ-related cost savings and an overall net positive national economic effect, among other reasons. For the application that the industry specialist had reviewed, the examiner’s report stated that, after reviewing all comments and information on the case record, OTEXA’s position continued to be that approving FTZ production authority in this circumstance, given the domestic supply of required textile materials, would encourage the use of imported textiles and reduce purchases from domestic producers, which could cause domestic production to decline. Phase 3: Authorization Decision Our review of the case records for the three production applications found that the Executive Secretary submitted the examiner’s reports and recommendations to CBP for review and comment and to the Board members for their respective votes, pursuant to the Board’s procedures and regulations, and that the applicants were notified of the Board’s decisions. We also found that all three applications took longer than the general 12-month time frame detailed by the regulations. Executive Secretary’s Recommendation and Board’s Decision We found that CBP reviewed, and concurred with, the examiner’s recommendations for all three applications. The Executive Secretary submitted copies of his memos for each of the three applications, along with the examiner’s reports and recommendations, to both the Treasury and Commerce Board members. The memos recommended authorizing two of the applications with restrictions and not authorizing the third application, in agreement with the examiner’s recommendations. In addition, the Executive Secretary’s memo to the Board regarding the application that OTEXA had reviewed stated that, as with recent cases involving textile-based production components, the content of OTEXA’s memorandum established a key basis for the final recommendation for the Board’s action. The Board members unanimously concurred with the Executive Secretary’s recommendations for all three applications. For all three applications, the Board staff notified the applicants of the Board members’ decisions. For one of the three applications we reviewed, Board staff developed the examiner’s preliminary recommendation within the general 150-day time frame cited in the Board’s procedures; for the other two applications, the staff took additional time. Each of the three applications involved foreign-source textile components, which our review of the case records showed can be controversial. For the three applications, the examiner took 116, 235, and 431 days, respectively, to complete the preliminary recommendations. In addition, the Board’s evaluation of each of the three applications that we reviewed took longer than the general 12-month time frame detailed by the regulations; however, the regulations state that processing a case may take longer when it involves a controversial or complex issue. Processing the three applications took approximately 18, 28, and 28 months, respectively, from the dates when the Board received the applications to the dates when the applicants were notified of the Board’s decisions.
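The processing times reported above can be checked against the general guidelines cited in this section, namely the 150-day time frame for the examiner's preliminary recommendation and the 12-month overall time frame. The day and month figures in the sketch below are those reported in this section; the comparison logic itself is an illustrative assumption.

```python
# Comparison of the three applications' reported processing times against
# the general guidelines described above: a 150-day time frame for the
# examiner's preliminary recommendation and a 12-month overall time frame.
PRELIM_GUIDELINE_DAYS = 150
OVERALL_GUIDELINE_MONTHS = 12

prelim_days = [116, 235, 431]   # days to the preliminary recommendation
overall_months = [18, 28, 28]   # months from receipt to applicant notification

for days, months in zip(prelim_days, overall_months):
    prelim_ok = days <= PRELIM_GUIDELINE_DAYS
    overall_ok = months <= OVERALL_GUIDELINE_MONTHS
    print(f"preliminary: {days} days (within guideline: {prelim_ok}); "
          f"overall: {months} months (within guideline: {overall_ok})")
```

Consistent with the report, the check shows one application meeting the 150-day preliminary guideline and none meeting the 12-month overall guideline.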
For all three applications, preliminary recommendations to either authorize with restrictions or not authorize led to the submission of additional evidence by the applicants, opposition and support by various parties through public comments in response to the Federal Register notices, and the applicants’ rebuttals of public comments. For example, Board staff said that for one of the applications, the OTEXA specialist who reviewed it asked the Board staff to request additional information from the applicant to facilitate analysis of the potential impact of the proposal. The applicant took more than 3 months to provide the information. After the specialist and the Board staff reviewed the additional information, a preliminary negative recommendation was rendered, which necessitated opening an additional public comment period. An opposing party requested an extension of that comment period. After the extended comment period ended, the Board staff said that they allowed a public comment period for rebuttal comments. According to Board staff, another application that we reviewed involved a somewhat similar set of complex circumstances. Board staff noted that these two applications each involved a complex set of circumstances that needed to be carefully and thoroughly reviewed. Lack of Consistent Documentation Made It Difficult to Verify the Board Considered All Required Criteria for Applications We Reviewed While the Board’s procedures and regulations do not call for staff to document their consideration of all criteria required by section 400.27 of the regulations, the absence of such documentation for two of the three applications we reviewed made it difficult to verify that the Board had considered all of these criteria when evaluating the applications. For example, the examiner’s report for one of these two applications did not include documentation to demonstrate that the Board staff had considered the required threshold factors. Also, the reports for the two applications did not include documentation that the staff had considered several of the required economic factors, including (1) retention or creation of value-added activity, (2) extent of value-added activity, and (3) overall effect on import levels of relevant products. The records for all three applications included documentation of consideration of the proposed production activity’s potential significant public benefits. Board staff and the Executive Secretary explained in interviews and in written responses to our questions how they had considered all the required threshold and economic factors and any significant public benefits when evaluating the three applications we reviewed. The examiner’s reports for the two applications did not include documentation indicating the Board staff’s rationale for selecting criteria as relevant. According to Board staff, each examiner’s report includes information that is most relevant to the analysis of the case. Each report also provided a narrative discussing the criteria that the Board staff considered relevant and that supported the recommendation, and each report explained the rationale for the Board staff’s decision to recommend authorizing with restrictions or not authorizing the production activity. According to the Board staff, because only the most relevant criteria are included in the examiner’s report, not all of the threshold and economic factors are explicitly documented.
According to Standards for Internal Control in the Federal Government, management should clearly document internal control and all transactions and other significant events in a manner that allows the documentation to be readily available for examination. If management determines that a criterion is not relevant, management should support that determination with documentation that includes its rationale. Without such documentation in the examiner’s reports, Board members lack readily available written assurance that the recommendations reflect consideration of all of the required criteria and that the Board’s decisions comply with U.S. trade and tariff laws and policy that has been formally adopted by the executive branch. In addition, such documentation would provide an institutional record of the examiner’s consideration of all the required criteria. According to Board staff, the examiner’s reports may contain varying levels of discussion on each criterion, depending on the specific circumstances of the application. Board staff stated that the criteria listed in section 400.27 of the regulations form the framework and basis of the analysis in each examiner’s report, although the analysis and discussion in the reports may not refer directly to each economic factor. With respect to the examiner’s report that contained no documentation of the consideration of the threshold factors, the Board staff stated that their consideration of the economic factors had indicated that the application should be denied and had formed the basis of the report’s recommendation. The recommendation and the Board’s decision would not be affected by including in the report a discussion of the threshold factors, according to the Board staff. In addition, Board staff stated that the extent to which the examiner’s reports discuss specific pieces of evidence can vary depending on the relevance and significance of each piece of evidence to determining whether the applicant has met the burden of proof for approval under the regulatory factors or criteria. The Board staff also noted that the extent to which the examiner addresses each piece of evidence is generally a subject of discussion with the Executive Secretary during the drafting of the report. Only by interviewing Board staff, in conjunction with our review of the case records, were we able to determine that the Board had considered all of the required criteria when making its recommendations to authorize (with or without restrictions) or not authorize an application for production authority. Conclusions The Board has procedures that generally align with the regulations for evaluating production notifications and applications for production authority, and our review of FTZ sample cases and interviews with Board staff and officials from other relevant agencies found that the Board followed these procedures. The Board regulations include criteria that the Board is required to consider during its review of an application for production authority. However, the examiner’s reports we reviewed did not consistently include documentation demonstrating that the examiner considered all required criteria before recommending whether the applications should be authorized. While not required by the Board regulations and procedures, such documentation would provide the Board members readily available written assurance that the recommendations reflect consideration of all of the required criteria and that the Board’s decisions comply with U.S. trade and tariff laws.
In addition, such documentation would provide an institutional record of the examiner’s consideration of all the required criteria. Recommendation for Executive Action The Secretary of Commerce, as Chairman of the FTZ Board, should ensure that the Board’s Executive Secretary incorporates into its procedures a requirement that each examiner’s report document Board staff’s consideration of all required criteria listed in section 400.27 of the regulations during evaluations of applications for production authority. (Recommendation 1) Agency Comments We provided a draft of this report to Commerce, Treasury, and the Department of Homeland Security for review and comment. Commerce provided written comments, which are reproduced in appendix V. In its comments, Commerce concurred with our recommendation and stated that it had taken action to address it. In addition, Commerce and Treasury provided technical comments, which we incorporated as appropriate. The Department of Homeland Security stated by email that it had no comments about our draft report. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretaries of Commerce, the Treasury, and Homeland Security and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-8612 or gianopoulosk@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI. Appendix I: Objectives, Scope, and Methodology This report examines (1) the extent to which the Foreign-Trade Zones Board (the Board) has established and followed procedures aligned with its regulations for evaluating production notifications and (2) the extent to which the Board has established and followed procedures aligned with its regulations for evaluating production applications. To examine the extent to which the Board has established procedures aligned with its regulations for evaluating production notifications and applications, we reviewed and compared the Board’s 2012 regulations to the Board staff’s internal procedures. In conducting this analysis, we also identified procedures that the Board is required to follow in evaluating notifications and applications. We interviewed Board staff, industry specialists in the Department of Commerce (Commerce), and officials from the Department of the Treasury (Treasury) and the Department of Homeland Security’s Customs and Border Protection (CBP) to identify their roles in the evaluation of notifications and applications and to clarify the regulations’ requirements and the Board’s internal procedures. To examine the extent to which Board staff followed the Board’s procedures when evaluating production notifications and applications, we selected and analyzed a nongeneralizable sample of case records for 59 of the 293 production notifications submitted to the Board from April 2012 through September 2017. We selected this time period to ensure that the sample reflected the Board’s activities between April 2012—when, according to staff, the Board began implementing regulations that it had modified in February 2012—and the end of fiscal year 2017.
To select our sample of 59 notifications, we first selected 10 of the 13 notifications submitted during the selected time period that were not approved by the Board. We did not select the remaining 3 notifications that were not approved, because the companies that submitted those notifications subsequently submitted production applications and the Board’s decisions about the applications were pending when we made our selection. The notifications that were not approved were submitted by companies in seven industry categories—silicones/polysilicon, textiles/footwear, oil refineries/petrochemical facilities, other energy, chemicals, medical supplies and devices, and miscellaneous. For each of these seven categories, our sample of 59 notifications includes all notifications for which the Board had rendered decisions at the time of our selection and excludes any for which decisions were pending. Our sample does not include six production notifications submitted by companies in the textiles/footwear industry category that the Board did not approve or approved with restrictions, because those companies subsequently submitted applications. Our final sample of 59 notifications includes all three types of Board decisions (34 approved, 15 approved with restrictions, and 10 not approved). However, because of its size, our final sample is not generalizable to all notifications submitted from April 2012 through September 2017. We also selected and analyzed three production applications, each submitted by one of three companies whose notifications were among the 59 we analyzed. These three applications were the only applications that the Board reviewed and rendered final decisions on from April 2012 through September 2017. We analyzed case records containing documents that companies submitted when they filed their production notifications and applications; information collected by Board staff from public comments in response to Federal Register notices; comments from industry specialists at Commerce, CBP, and the Department of Justice; and reports prepared by Board staff, documenting their analyses and recommendations for each notification and application. To conduct a systematic assessment of the case records, we created a data collection instrument to determine, among other things, whether the applicant submitted all required information for each notification and application. In addition, at least two analysts, including an economist, independently reviewed each case record; any resulting disagreements were resolved through discussion among team members and, as appropriate, with Board staff. Further, we collected and analyzed data for these cases on the types of Board decisions (approved, approved with restrictions, and not approved); the extent of public comments received for both notifications and applications; the extent of industry specialists’ and CBP’s comments; the types and number of notification restrictions; and whether the duration of the Board’s evaluations was within the time frames detailed in the Board’s regulations and procedures. We also determined the extent to which the recommendations of the Board’s analysts, Commerce’s industry specialists, the Board’s Executive Secretary, and Board members were in agreement.
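The independent double-review step described above, in which at least two analysts code each case record and disagreements are resolved through discussion, can be sketched as a simple agreement check. The record identifiers and codings below are hypothetical, not taken from the actual case records.

```python
# Sketch of the independent double review described above: each case record
# is coded by at least two analysts, and any disagreement is flagged for
# discussion among team members. Record IDs and codings are hypothetical.
reviews = {
    "case-001": ["approved", "approved"],
    "case-002": ["approved_with_restrictions", "approved_with_restrictions"],
    "case-003": ["not_approved", "approved_with_restrictions"],  # disagreement
}

disagreements = [case for case, codes in reviews.items() if len(set(codes)) > 1]
print("cases needing discussion:", disagreements)
```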
We determined that the case records data we reviewed, which we obtained from the Board’s case tracking system, were sufficiently reliable for our purposes of understanding the universe of notifications and applications submitted for production authority and reviewing a sample from that universe. To make this determination, we took steps that included reviewing related documentation guidance for the Board’s case records tracking system; interviewing knowledgeable agency officials; and reviewing a sample of cases with our data collection instrument, which confirmed information included in the case tracking system data. Further, we analyzed the extent to which Board staff considered all required threshold and economic factors and any significant public benefits for the three applications in our sample. While neither the Board’s regulations nor its procedures require Board staff to document consideration of all required threshold and economic factors and significant public benefits, as detailed in section 400.27 of the regulations, during their evaluations of production applications, Standards for Internal Control in the Federal Government calls for such documentation. To conduct this analysis, we reviewed the examiner’s reports for all three applications and interviewed Board staff to determine whether the examiner had considered all of the required criteria. We cannot generalize or extrapolate our analysis for the three applications to all notifications and applications submitted to the Board from April 2012 through September 2017. We also interviewed relevant officials from Commerce (including industry specialists), Treasury, and CBP to obtain clarifications regarding some of the notifications and applications in our sample. We conducted this performance audit from July 2017 to November 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Foreign-Trade Zones Board Decisions for All Production Notifications Submitted April 2012–September 2017 From April 2012 through September 2017, the Foreign-Trade Zones Board (the Board) rendered decisions on 293 notifications requesting foreign-trade zones (FTZ) production authority that were submitted by companies in 25 industry categories (see table 2). The Board reported approving 218 notifications (74 percent), approving 62 notifications with restrictions (21 percent), and not approving 13 notifications (4 percent). Nine of the companies whose notifications were approved with restrictions or not approved continued to seek production authority by submitting production applications. Our analysis of the Board’s decisions from April 2012 to September 2017 found the following. The Board approved all production notifications for six industry categories: auto parts (25 notifications), pharmaceutical (21 notifications), other electronics/telecommunications (10 notifications), metals and minerals (7 notifications), semiconductors (3 notifications), and oil drilling equipment (2 notifications). According to Board staff, companies in some industry categories, such as auto parts and pharmaceutical, often have long-established records of operating in FTZs.
Board staff also stated that many companies in these industry categories submit notifications requesting production authority for items similar to those for which the Board has granted authority in the past. Officials also stated that companies are more likely to submit notifications requesting authorization for certain production activities if other companies have previously received authorization for similar activities.

Textiles/footwear was the industry category with the largest number of notifications that were approved with restrictions or not approved. Of the 23 notifications submitted, 14 were approved with restrictions and 4 were not approved. Board staff noted that domestic textile producers that could be affected by authorization of production notifications are often those that oppose approval of the notifications.

Of the companies that submitted the 75 production notifications that were approved with restrictions or not approved, 9 companies continued seeking production authority by filing a more detailed production application with the Board. As of August 2018, 2 applications had been authorized with restrictions, 2 applications had not been authorized, and the Board's decisions were pending for the remaining 5 applications.

Appendix III: Rationales for Foreign-Trade Zones Board Decisions for Selected Production Notifications Submitted April 2012–September 2017

We selected and analyzed Foreign-Trade Zones (FTZ) Board (the Board) case records for a nongeneralizable sample of 59 notifications to identify the rationales for the Board's decisions and the types of restrictions, if any, included in the decisions. Table 3 shows the Board's decisions for the 59 notifications in our sample, by industry category.

Notifications That Were Approved

The Board approved production authority for 34 of the 59 notifications in our sample. Our analysis of Board case records found that the Board's rationale for its decision for 32 of the 34 authorizations fell into one of the following four categories:

The Board had previously approved similar production authority for another company (23 notifications). For example, in evaluating a notification requesting production authority for lithium ion batteries and electric vehicle motors, Board staff noted that the Board had approved similar production notifications for other companies in recent years.

No opposition or concerns were raised by an industry or industry analyst during the Board's review of the notification (4 notifications). For example, in evaluating one notification, Board staff noted that no concerns were raised during the public comment period or by Department of Commerce industry analysts.

No duty savings would be realized for the finished product of the proposed activity (3 notifications). For example, in evaluating a notification requesting authority to produce finished upholstery grade leather and cut parts, Board staff noted that duties for the finished goods were not lower than the duties for the components (leather hides) required for production.

No products of foreign origin would be involved in the proposed activity (2 notifications). For example, in evaluating one notification Board staff noted that the applicant was not requesting the use of any foreign-source steel in the proposed FTZ operations.

Notifications That Were Approved with Restrictions

Our sample of 59 notifications included 15 cases in which the Board approved production authority with restrictions.
Our analysis of Board case records found that the restrictions imposed by the Board fell into one or more of the following six categories.

The Board required the company to pay duties on one or more components before importing the component into the FTZ (9 notifications). For example, the Board's decision for one notification stated that the company must pay duties on certain foreign-origin upholstery fabrics before bringing them into the zone.

The Board required the company to pay duties on some or all components brought into the FTZ when transferring the finished product from the zone, even if the components were used in production (8 notifications). For example, for one notification, the Board required the company to pay duties on upholstery leather brought into an FTZ for manufacturing furniture when the furniture left the FTZ.

The Board authorized a limited quantity of certain components specified in the notification (6 notifications). For example, the Board decision for one notification limited the square yards of a given fabric that the company was allowed to admit into an FTZ.

The Board required the FTZ user to submit additional data and information (6 notifications). For example, the Board decision for one notification required the company to submit supplemental annual report data and information for the purpose of monitoring by Board staff.

The Board restricted the duration of FTZ production authority (4 notifications). For example, the Board decision for one notification limited production authority to 5 years.

The Board required that a product be re-exported from the zone (not for entry into the U.S. market) (1 notification). For this notification, the Board instructed the company to ship all of the foreign upholstery fabric out of the subzone and not ship it into the United States for U.S. consumption.

For the 15 notifications that were approved with restrictions, our analysis of Board case records found that the Board's rationales for its decisions fell into one or more of the following six categories.

Similar authority had been approved in the past (6 notifications). For example, in its decision for one notification, the Board noted that a similar authority had been requested by another company and that the authority was granted with a similar restriction.

The proposed activity supported U.S.-based production that otherwise would be conducted abroad (4 notifications). For example, in decisions for two notifications, the Board noted that the approved production authority supported domestic U.S. production that otherwise could be (or was being) conducted abroad. The restriction for these notifications concerned the quantity of a fabric that could be brought into the zone duty free.

New or complex policy issues were involved (2 notifications). For example, in its decision for one notification, the Board approved the requested production authority for the first time and added a time restriction that would allow the Board to identify any domestic impact.

No opposition was raised by domestic industry or by industry analysts (1 notification). In its decision for this notification, the Board noted that industry analysts at Commerce had no concerns as long as the company paid duties on imported fabric components specified in the notification when the finished good left the zone.

No duty savings would be realized for the finished product of the proposed activity (1 notification).
In its decision for this notification, the Board noted that the applicant had indicated it would pay duties on all foreign-source materials when leaving the zone for sale in the United States.

The proposed activity would have no duty-reduction benefit and would help only with logistics or record-keeping (1 notification). In its decision for this notification, the Board noted that production authority had previously been approved with restrictions and that the company had requested a change to the authorization for record-keeping purposes.

Notifications That Were Not Approved

The Board's reasons for not approving 10 notifications fell into one or more of the following two categories.

New or complex policy issues or concerns were involved (5 notifications). For example, in its decisions for these notifications, the Board noted that (1) it had not previously approved production authority for a given component or a given product, (2) circumstances within the industry and opposition to the production notification continued to evolve, (3) the production process made tracking the source or destination of a given component difficult when it entered or left the FTZ, (4) the component or product involved sensitive trade policy issues, or (5) the economic impacts and potential precedents were unclear.

Further review was needed because of domestic industry concerns (8 notifications). For example, in its decisions for these notifications, the Board cited concerns that included the possibility that authorization would put pressure on domestic industries already experiencing low growth and depressed prices and would cause disagreements between the applicant and industry members regarding the domestic availability of an FTZ production component at competitive prices. In addition, for one notification, the Board's decision rationale stated that, although similar authority had been approved several years earlier, authority was not currently being granted because conditions had changed since the earlier authorization.

Appendix IV: Time Frames for Foreign-Trade Zones Board's Processing of Selected Production Notifications and Applications

The Foreign-Trade Zones Board (the Board) regulations establish time frames for evaluating notifications and applications submitted by companies seeking permission to conduct production activities in a foreign-trade zone (FTZ). The regulations require that the Executive Secretary inform the applicant of the Board's authorization decision within 120 days of receiving the notification. The regulations also state that the general time frame to process applications for production authority is 12 months. We selected and analyzed a nongeneralizable sample of 59 notifications and 3 applications and the Board's case records to examine, among other things, whether the Board completed its processing of these notifications and applications within the time frames detailed in the Board's regulations. We found that the Board generally followed the 120-day time frame for the majority of the 59 notifications in our sample but, for all 3 applications that we reviewed, took longer than the general 12-month time frame set in the regulations for the applications. According to the regulations, additional time may be required to process applications that involve a complex or controversial issue.

Notification Processing Time Frame

The Board generally completed its processing of the 59 notifications we reviewed within the time frames detailed in the regulations.
Eight cases were completed in less than 120 days, with time frames ranging from 21 to 119 days. Twenty-five cases were completed in exactly 120 days. In 13 cases, the 120th day fell on a weekend or a holiday, and the review was completed on the next business day. Another 13 cases were delayed and completed in 122 to 160 days. According to Board staff, processing 5 of these 13 notifications exceeded the 120-day time frame because of a government shutdown. In addition, according to Board staff, processing 8 of the 13 notifications exceeded the 120-day time frame because of internal procedural delays, such as an industry specialist's needing more time to analyze a notification. Of those 8 notifications, 7 were submitted by companies in the textiles/footwear industry and the eighth was submitted by a company in the "other energy" industry category.

The time that the Board took to complete processing (i.e., finish its evaluations and inform applicants of its decisions) for the 59 notifications we reviewed varied by industry category (see table 4). For example, the Board informed all of the applicants that submitted notifications in the chemical, medical supply and device, and silicone/polysilicon industry categories of its decisions within 120 days or within 120 days plus the next business day. However, for 7 of 17 notifications from companies in the textiles/footwear industry category, the Board informed applicants of its decisions after the 120-day period.

Application Processing Time Frame

The Board's processing of each of the three applications in our sample took longer than the general 12-month (365 days) time frame set in the regulations. Processing of the three applications took 558, 866, and 864 days, respectively, from the date when the Board received the application to the date when the applicant was notified of the Board's decisions. For all three applications, the Board issued preliminary recommendations either to approve with restrictions or not to approve the requested production authority. These preliminary decisions led to the submission of additional evidence, rebuttals to additional evidence, and opposition and support by various parties, which extended the time needed for final decisions by the Board members. The regulations state that evaluating an application may take longer when it involves a controversial or complex issue. The three applications we reviewed involved textile-related foreign components, which the case records and our interviews with Board officials showed can be controversial.

For the three applications, completing certain steps delayed Board staff's processing of the applications, causing it to exceed the general time frame set in the regulations. For example, the regulations state that the examiner shall generally develop recommendations and submit a report within 150 days after the end of the public comment period. For the three applications, the examiner took 116, 235, and 431 days, respectively, to complete the preliminary recommendations. According to Board staff, processing two of the applications took longer than the general time frame because of a complex set of circumstances that called for careful and thorough review. In addition, under the regulations, once the Executive Secretary has circulated the examiner's report, the Department of the Treasury (Treasury) Board member is generally expected to return a vote within 30 days. For the three applications we reviewed, Treasury took 26, 90, and 212 days, respectively, to return a vote. A Treasury official also stated that before rendering a decision about two applications requesting the same type of authorization, Treasury waited for Board staff to complete its review of both applications. The Treasury official stated that he held substantial discussions with Board staff about each of the three applications before reaching a decision.
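As an illustration of the time-frame arithmetic discussed in this appendix, the following Python sketch (hypothetical dates and holiday set, not the Board's or GAO's actual tooling) computes a notification's 120-day deadline, rolls a deadline that falls on a weekend or holiday forward to the next business day, and tests whether a decision date met it:

```python
# Hypothetical sketch only: compute a notification's 120-day deadline,
# extending a deadline that lands on a weekend or holiday to the next
# business day, then test whether the decision date met it.
from datetime import date, timedelta

def adjusted_deadline(received: date, days: int = 120,
                      holidays: frozenset = frozenset()) -> date:
    deadline = received + timedelta(days=days)
    while deadline.weekday() >= 5 or deadline in holidays:  # 5/6 = Sat/Sun
        deadline += timedelta(days=1)
    return deadline

received, decided = date(2016, 3, 1), date(2016, 6, 29)  # invented example
deadline = adjusted_deadline(received)
print(f"deadline {deadline}; met on time: {decided <= deadline}")
```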
Appendix V: Comments from the Department of Commerce

Appendix VI: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the individual named above, Christine Broderick (Assistant Director), Barbara R. Shields (Analyst-in-Charge), Claudia Rodriguez, Pedro Almoguera, Martin de Alteriis, Grace Lui, Reid Lowe, and Christopher Keblitis made key contributions to this report. Other contributors include Lilia Chaidez, Philip Farah, Peter Kramer, and Julia Robertson.
Why GAO Did This Study

FTZs allow companies to reduce, eliminate, or defer duty payments on foreign goods imported into FTZs for distribution or as components of other products before transferring the finished goods into U.S. commerce or exporting them overseas. The value of foreign and domestic goods admitted to FTZs in 2016 exceeded $610 billion. Responsibilities of the Board, consisting of officials from the Departments of Commerce (Commerce) and the Treasury, include evaluating production notifications and applications on the basis of factors such as the proposed activity's net effect on the U.S. economy. Federal regulations set forth requirements, pursuant to the Foreign-Trade Zones Act of 1934, for these evaluations.

GAO was asked to review the Board's evaluation processes. This report examines the extent to which the Board has established and followed procedures aligned with regulations for evaluating (1) notifications and (2) applications. GAO analyzed the Board's regulations and procedures and interviewed Commerce, Treasury, and U.S. Customs and Border Protection officials. GAO also analyzed a nongeneralizable sample of 59 of 293 notifications the Board evaluated from April 2012 through September 2017, which GAO selected to include a range of Board decisions and exclude pending decisions. GAO also analyzed all three applications the Board issued decisions on during that period.

What GAO Found

The U.S. Foreign-Trade Zones Board (the Board) has procedures that generally align with its regulations for evaluating production notifications and followed these procedures for all 59 notifications GAO reviewed. Notifications are filed by companies proposing to bring foreign components into a foreign-trade zone (FTZ) for use in manufacturing finished products, among other purposes. GAO found, for example, that, following Board procedures, Board staff evaluating the notifications collected and considered comments from the general public, industry specialists, and U.S. Customs and Border Protection and recommended to the Board whether to authorize companies' proposed activities. Of the 59 notifications GAO reviewed for seven industry categories, 49 notifications either were approved or were approved with restrictions—for example, the proposed activity was authorized for a limited time period or certain duty benefits were denied for one or more foreign components. Ten notifications were denied for reasons such as new or complex policy issues that required further review.

The Board also has procedures that generally align with its regulations for evaluating production applications and followed these procedures for the three applications GAO reviewed. The applications were submitted by three of the companies whose notifications were denied. According to Board staff, if a notification is not approved or is approved with restrictions, a company may submit an application with additional details. Following Board procedures, Board staff, for example, collected and considered comments and recommended to the Board whether to authorize the proposed activities. Two of the applications were approved with restrictions, and the third was not approved. While the regulations require consideration of a number of criteria—for example, consistency with U.S. trade and tariff law—Board staff did not document consideration of all required criteria for two of the three applications, and the procedures do not require such documentation. Board staff said they document only the most relevant criteria in their reports.
Standards for Internal Control in the Federal Government states that management should document its rationale for determining a criterion is not relevant and make this documentation readily available for examination. Without such documentation, the Board lacks an institutional record that all required criteria were considered and also lacks assurance that its decisions comply with U.S. trade and tariff law and public policy.

What GAO Recommends

Commerce should require Board staff to document consideration of all criteria required in the regulations when evaluating production applications. Commerce concurred with this recommendation.
Background

Medical Record Requests

Patients may request copies of their medical records, or request that copies of their records be sent to a designated person or entity of their choice.

In a patient request, a patient or former patient requests access to or copies of some or all of her medical records, in either paper or electronic format. For example, a patient might want to keep copies for her own personal use or to bring with her when moving or changing providers.

In a patient-directed request, a patient or former patient requests that a provider or other covered entity send a copy of the patient's medical records directly to another person or entity, such as another provider. For example, a patient might request that her medical records be forwarded to another provider because the patient is moving or wants to seek a second opinion.

In a third-party request, a third party, such as an attorney, obtains permission from a patient (via a HIPAA authorization form that is signed by the patient) to access the patient's medical records. For example, with permission from the patient, a lawyer might request copies of a patient's medical records to pursue a malpractice case.

HIPAA's Privacy Rule—the regulations that implement HIPAA's privacy protections—requires that upon request, HIPAA-covered entities, such as health care providers and health plans, provide individuals with access to their medical records. Under HIPAA's implementing regulations, providers and other covered entities must respond to a patient or patient-directed request for medical records within 30 days. The Privacy Rule also establishes an individual's right to inspect or obtain a copy of his or her medical records, which, as amended in 2013, includes the right to direct a covered entity to transmit a copy of the medical records to a designated person or entity of the individual's choice. Individuals have the right to access their medical records for as long as the information is maintained by a covered entity or by a business associate on behalf of a covered entity, regardless of when the information was created; whether the information is maintained in paper or electronic systems onsite, remotely, or is archived; or where the information originated. Finally, the HIPAA Privacy Rule also describes the circumstances under which protected health information in medical records may be released to patients and third parties.

In February 2016, OCR issued guidance to explain its 2013 regulations. Among other things, this guidance states that as part of a patient's right of access, patients have the right to obtain copies of their medical records and the right to have their records forwarded to a person or entity of their choice; in these circumstances, patients are only to be charged a "reasonable, cost-based fee." The guidance further notes that state laws that provide individuals with greater rights of access to their medical records are not preempted by HIPAA and still apply. With respect to fees, patients may not be charged more than allowed under the Privacy Rule, even if state law provides for higher or different fees.

Fulfilling Medical Record Requests

To respond to medical record requests, providers either use staff within their organization or may contract with ROI vendors to conduct this work. In general, both providers' staff and ROI vendors follow the same process when fulfilling requests for medical records for both individual patients and third parties. (See fig. 1.)
Available Information Suggests That Fees for Accessing Patient Medical Records Vary by Type of Request and State

Available information suggests that the allowable fees for accessing medical records vary by type of request—that is, whether a patient or third party is making the request—and by state. Federal laws establish limits on the fees that may be charged for two of the three types of requests for medical records: (1) patient requests, when patients request access to their medical records, and (2) patient-directed requests, when patients request that their records be sent to another person or entity, such as another provider. HIPAA does not establish limits on fees for third-party requests.

For patient and patient-directed requests, providers may charge a "reasonable, cost-based fee" under HIPAA's implementing regulations. OCR's 2016 guidance gives examples of options providers (or a ROI vendor responding to requests for medical records on behalf of a provider) may use in determining a "reasonable cost-based fee." (See table 1.)

In addition to the HIPAA requirements, some states have established their own fee schedules, formulas, or limits on the allowable fees for patient and patient-directed requests. State laws that allow for higher fees than permitted under HIPAA are preempted by the federal law, but those providing for lower fees are not preempted. Representatives from ROI vendors, provider representatives, and other stakeholders we interviewed told us that not all states have established their own requirements governing the fees for medical record requests and, among the states that have, the laws can vary. For example, states can vary as to whether they set a maximum fee that may be charged or whether they establish a fee schedule that is applicable to paper records, electronic records, or both. While states may establish per-page amounts that can be charged for a copy of a patient's medical records, these per-page amounts can vary.

In contrast with patient and patient-directed requests, the fees for third-party requests are not limited by HIPAA's reasonable, cost-based standard for access requests and are instead governed by state laws, regulations, or other requirements. For third-party requests, providers and vendors working on their behalf may charge whatever is allowed under these state requirements. According to ROI vendors and other stakeholders we interviewed, such fees are typically higher than the reasonable, cost-based fees permitted under HIPAA for patient and patient-directed requests and may be established by formulas that vary by state. For example, states can vary as to whether they establish per-page copy fees, allow providers to charge a flat fee, or charge different fees based on the type of media requested (e.g., electronic copies, X-rays, microfilm, paper, etc.). Additionally, state laws of general applicability (for example, the commercial code) may govern the permissible fees applicable to ROI release of records. Representatives of ROI vendors we interviewed stated that there is significant variation in the state laws that govern the fees for third-party requests, and companies employ staff to track the different frameworks.
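The division of fee standards described above amounts to simple decision logic. The following Python sketch is illustrative only; the helper and its return strings are hypothetical summaries of that framework, not language from HIPAA or any state statute:

```python
# Illustrative decision logic only; names and strings are hypothetical
# summaries of the framework described above.
def applicable_fee_standard(request_type: str) -> str:
    if request_type in ("patient", "patient-directed"):
        # HIPAA's reasonable, cost-based limit applies; a state law that
        # allows only a lower fee is not preempted and still applies.
        return "HIPAA reasonable, cost-based fee (or lower state limit)"
    if request_type == "third-party":
        # Not limited by HIPAA's cost-based standard; state requirements govern.
        return "state laws, regulations, or other requirements"
    raise ValueError(f"unknown request type: {request_type}")

for kind in ("patient", "patient-directed", "third-party"):
    print(f"{kind}: {applicable_fee_standard(kind)}")
```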
Across the four selected states, we found examples of the kinds of variation stakeholders have described in the allowable fees for patient and third-party requests for medical records. (See table 2.)

Three of the states—Ohio, Rhode Island, and Wisconsin—have established per-page fee amounts. The amounts charged are based on the number of pages requested and vary across the three states. These three states have also established specific fee rates for requesting media such as X-ray or magnetic resonance imaging scan images.

One state—Ohio—has established a different per-page fee amount for third-party requests. The other three states have not established different fees for different types of requests (i.e., between patient and third-party requests).

One state—Rhode Island—specifies a maximum allowable fee if the provider uses an electronic health records (EHR) system for patient and patient-directed requests.

One state—Kentucky—entitles individuals to one free copy of their medical record under state law. The statute allows a charge of up to $1 per page for additional copies of a patient's medical records.
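As a worked example of how per-page schedules of this kind translate into charges, the following Python sketch uses entirely hypothetical rates (not the actual schedule of Ohio, Rhode Island, Wisconsin, Kentucky, or any other state), with a two-tier per-page fee and a flat rate for imaging media:

```python
# Worked example with entirely hypothetical rates, not any state's
# actual schedule: a two-tier per-page fee plus a flat per-image rate
# for media such as X-rays.
def copy_fee(pages: int, xray_images: int = 0, tier1_rate: float = 1.00,
             tier2_rate: float = 0.50, tier1_pages: int = 25,
             per_image_rate: float = 5.00) -> float:
    tier1 = min(pages, tier1_pages) * tier1_rate
    tier2 = max(pages - tier1_pages, 0) * tier2_rate
    return round(tier1 + tier2 + xray_images * per_image_rate, 2)

print(copy_fee(pages=30))    # 25 * $1.00 + 5 * $0.50 = $27.50
print(copy_fee(pages=2000))  # a lengthy record: $1,012.50 at these rates
```

At rates like these, a record running to thousands of pages quickly produces a fee above $1,000, which is consistent with the concerns patient advocates raise later in this report.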
In some cases, questions have been raised about the fee structure that should be applied to certain types of requests. Representatives from ROI vendors we interviewed told us that they have seen an increase in third parties (primarily law firms) submitting requests for medical records and indicating that the requests are patient-directed and therefore subject to HIPAA's reasonable, cost-based fee standard. According to these representatives, it is sometimes difficult for them to determine whether it is an attorney making a third-party request or an attorney submitting a patient-directed request because, for example, patient-directed requests are submitted by a patient's attorney and appear similar to traditional third-party requests (e.g., they appear on legal letterhead). As a result, the representatives said that they are often unsure about which fee structure to apply to the request: a reasonable, cost-based fee or a fee for a third-party request, which ROI vendors told us is typically higher.

When asked about the reported distinction between fees for patient-directed and third-party requests, OCR officials told us that they are in the process of considering whether any clarification is needed to their 2016 guidance. This guidance describes the requirements of HIPAA and the Health Information Technology for Economic and Clinical Health (HITECH) Act, as well as their implementing regulations. HIPAA provides patients with a legally enforceable right of access to their medical records. OCR officials explained that the HITECH Act amended HIPAA and specifies that a patient's right of access includes the right to direct a provider to transmit the records directly to an entity or individual designated by the individual. According to OCR officials, the same requirements for providing a medical record to an individual, such as the limits on allowable fees and the format and timeliness requirements, apply to patient-directed requests. OCR officials told us that they are considering whether—and if so, how—they could clarify the 2016 guidance within the constraints of HIPAA and the HITECH Act.

Stakeholders Identified Fees and Other Challenges for Patients Accessing Medical Records and Challenges for Providers in Allocating Resources to Respond to Requests

Patient advocates and others we interviewed described challenges patients face accessing medical records, such as high fees. Provider representatives described challenges providers face, including allocating staff time and other resources to respond to requests for medical records.

Patient Advocates and Other Stakeholders Described High Fees for Obtaining Medical Records, While Providers and Patients May Be Unaware of Patients' Access Rights

Multiple stakeholders we interviewed—patient advocates, a provider representative, experts, and a representative from an ROI vendor—told us that some patients have incurred high fees when requesting access to their medical records. Stakeholders noted that in some cases the fees reported by patients appear to exceed the reasonable, cost-based standard established under HIPAA. One patient advocacy organization, which collects information on patients' access to their medical records, described the following examples reported to them by patients:

Two patients described being charged fees exceeding $500 for a single medical record request.

One patient was charged $148 for a PDF version of her medical record.

Two patients were directed to pay an annual subscription fee in order to access their medical records.

One patient was charged a retrieval fee by a hospital's ROI vendor for a copy of her medical records. Retrieval fees are prohibited under HIPAA.

In addition, according to patient advocates we interviewed, high fees can adversely affect patients' access to their medical records. For example, one patient advocate told us that some patients simply cancel their requests after learning about the potential costs associated with their request. Another patient advocate told us that patients are often unable to afford the fees charged for accessing their medical records, even in cases when the fees are allowed under HIPAA or applicable state law. This advocate explained that per-page fees, even if legally authorized, can pose challenges for patients; in particular, patients who have been seriously ill can accumulate medical records that number in the thousands of pages and can, as a result, face fees in excess of $1,000 for a single copy of their records.

Stakeholders we interviewed told us that in many cases, providers may also be unaware of patients' right to access their medical records and the laws governing the fees for doing so. Two patient advocates and an expert said that patients are sometimes denied access to their medical records. Patient advocates and experts told us that some providers are not aware of the 2016 OCR guidance, which describes patients' rights to access their medical records, as well as the permitted fees for such access. One patient advocate and a provider representative also noted that providers may be confused about caregivers' and family members' access to medical records. For example, providers sometimes incorrectly deny family members' access to a patient's health information, which HIPAA allows under certain circumstances. Provider representatives, patient advocates, and an expert agreed that providers could benefit from more training on medical record access issues, including training on the options patients have for accessing their medical records.

Stakeholders we interviewed also noted that patients themselves are not always aware of their right to access their medical records, do not always know that they can submit a formal complaint to HHS's OCR when denied access, and could benefit from specific educational efforts that raise awareness of these issues.
For example, patient advocates said that the "notice of privacy practices" form that patients receive and are asked to sign when they first seek care from a provider could be improved to raise awareness of the rights associated with accessing medical records. This form is used to explain a provider's privacy policies and obligations, and what patients have to do to obtain access to their medical records. However, a provider association and an expert told us that these forms are not always easy for patients to understand, and patients might not always read them. OCR has developed a standard privacy notice that providers may adopt if they choose. However, a patient advocate told us that most providers are still using their own versions of the notice.

Provider Representatives and Other Stakeholders Described Challenges of Allocating Staff Time and Other Resources, While Technology Has Improved Patients' Ability to Access Records

Multiple stakeholders we interviewed told us that responding to patient requests for medical records can be challenging because it requires the allocation of staff and other resources and, as a result, responding to such requests can be costly. Furthermore, a provider representative, three representatives from ROI vendors, and a patient advocate confirmed that providers and their staff may lack the expertise needed for responding to requests for medical records in a manner that complies with HIPAA and applicable state laws. Providers can receive training on HIPAA-related issues; however, a patient advocate told us that this training, which may be provided by private companies, often focuses on security issues (i.e., maintaining secure medical record systems) and not on the rights of patients.

In addition, stakeholders we interviewed commonly stated that the increased use of electronically stored health information in EHRs has resulted in a more complex and challenging environment when responding to requests for patients' medical records. For example, these stakeholders noted the following:

Extracting medical records from EHRs is not a simple "push of a button" and often requires providers or their ROI vendors to go through multiple systems to compile the requested information. Stakeholders noted that printing a complete record from an EHR system can result in a document that is hundreds of pages long due to the amount of data stored in EHR systems.

Representatives from three ROI vendors told us that as providers have transitioned from using paper records to using EHR systems, information has been scanned into electronic medical records. This has, in some cases, resulted in records being incorrectly merged (e.g., the records of two patients merged into a single record). As a result, when responding to a medical record request, providers or their vendors must carefully go through each page of the record to ensure only the correct patient's medical records are being released.

A provider representative, representatives from four ROI vendors, and two experts noted that providers often have multiple active EHR systems, or have legacy EHR systems in which some medical records are stored. This requires providers and their vendors to go through multiple EHR systems to extract information in response to a medical record request.

Some providers still have a mix of paper and electronic records, which ROI vendors and provider representatives told us makes responding to medical record requests more difficult and time consuming.
A provider representative and other stakeholders said that while patients can request copies of their records in an electronic format, providers may have security concerns about sending information via unsecured email or providing electronic information via a patient's USB stick, which increases the risk of a provider's system becoming infected with malware.

While health information technology has created some challenges for providers, numerous stakeholders we interviewed told us that the technologies have made accessing medical records and other information easier and less costly for patients. For example, multiple stakeholders we interviewed told us that an increase in the use of patient portals has reduced the number of patient requests for access to their medical records because patients are able to directly access some health information through the portals. As we have previously reported, patient portals have facilitated patient access to medical records and patients have noted the benefits from having such electronic access, even though portals do not always contain all the information patients need. The use of patient portals has not eliminated patient requests for access to their medical records; a provider representative we interviewed said that many patients still prefer to obtain paper copies of their records.

OCR Investigates Complaints, Audits Providers, and Educates Patients and Providers about Patient Access

To enforce patients' right of access under HIPAA's Privacy Rule, the HHS OCR undertakes four types of efforts. OCR (1) investigates complaints it receives from patients and others regarding access to patient medical records, (2) audits a sample of providers to determine the extent to which their policies and procedures are compliant with HIPAA, (3) reports to Congress on compliance with HIPAA, and (4) educates patients and providers about patients' rights to access their medical records.

Investigation of Patient Complaints

OCR has established a process for investigating patients' complaints over access to their medical records. Via an online portal on its website, OCR receives complaints submitted by patients. Staff in OCR's headquarters office conduct an initial review of the information provided by the complainant. According to OCR officials, complaints that cannot be immediately resolved are generally assigned to a regional office investigator, who is responsible for reviewing the complaint and obtaining additional information from the complainant and provider, if needed. After the investigator completes the investigation, OCR issues a letter to both the provider and patient explaining what OCR has found. Depending on the nature of the findings, OCR may, for example, issue technical assistance to the provider; close the complaint without identifying a violation; require the provider to implement a corrective action plan; conduct a more detailed investigation; and, if warranted, levy a civil monetary penalty. According to OCR officials, the use of civil monetary penalties is rare and reserved for situations where providers' behavior is particularly egregious.
Examples of patient access complaints provided to us by OCR included complaints about the following:

providers not responding even after the patient made multiple requests, or providers taking longer than 30 days to respond to a request for medical records or other information;

providers charging excessive fees for copies of patients' medical records;

providers not responding to requests from personal representatives; or

providers denying medical records requests from a parent or parents of children.

Our analysis of OCR data also shows that the amount of time OCR takes to investigate and close a patient access complaint varies. OCR received a total of 583 patient access complaints between February 2016 and June 2017, closing 437 of these complaints during that same time period. These 437 complaints took anywhere from 11 to 497 days to close. (See fig. 2.) The majority of these 437 complaints (63 percent) were closed in 200 or fewer days. OCR officials stated that while there is no required time frame for closing a complaint involving patients' access to their medical records, they aim to close cases in fewer than 365 days, and investigators aim to get patients access to their medical records as soon as possible, which typically occurs before the case is formally closed (i.e., before a formal letter is issued to the provider and patient).

OCR officials noted a number of reasons why complaints can take a significant amount of time to close. In some cases, the patient receives her records early in the investigation, but the complaint is kept open by OCR to ensure that agreed-upon or recommended corrective actions are taken by the provider—for example, training staff on patient access rights or demonstrating that the provider's policies pertaining to patient access have been changed. In other complaints, time is needed for OCR to obtain consent from the patient who filed the complaint. OCR officials noted that in some instances, patients ultimately decide they do not want to give OCR consent to investigate their complaint, due to concerns that the provider will learn their identity. OCR officials also noted that complaints that are moving towards more serious enforcement actions, such as civil monetary penalties, may also take a long time to close. Finally, OCR officials noted that their own staffing limitations in regional offices can sometimes result in complaints taking additional time to close.
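Summary figures like those above, such as the range of days to close and the share of complaints closed within 200 days, are straightforward to compute. The durations in the following Python sketch are made up, not OCR's complaint data:

```python
# Illustrative only: the durations below are invented, not OCR's data.
# The sketch reproduces the kinds of summary figures reported above
# (range, median, and share of complaints closed within 200 days).
from statistics import median

days_to_close = [11, 45, 90, 150, 180, 210, 320, 497]  # hypothetical sample

share_within_200 = sum(d <= 200 for d in days_to_close) / len(days_to_close)
print(f"range: {min(days_to_close)}-{max(days_to_close)} days")
print(f"median: {median(days_to_close)} days")
print(f"closed within 200 days: {share_within_200:.0%}")
```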
OCR Audits

The HITECH Act requires OCR to conduct periodic audits of selected covered entities in order to review the policies and procedures the covered entities have established to meet HIPAA requirements and standards. The right of patients to access their medical records is included in these requirements. As part of its most recent audit, OCR officials stated that they reviewed 103 covered entities regarding their policies related to patient access to health information, including the entities' notice of privacy practices. In addition, OCR reviewed any access requests the covered entities received from patients, including both requests that were granted and requests that were denied. OCR examined these access requests to determine whether access was provided in a manner that was consistent with the covered entities' policies and procedures and whether the entities fulfilled the requests they received within the 30-day time frame established under the Privacy Rule. OCR also examined any fees that were charged for access and whether those fees met HIPAA's reasonable, cost-based standard.

OCR officials said that after completing each audit, OCR submitted a draft report to the audited entity for review. The entity had 10 days to review and submit any feedback to OCR, which OCR reviewed and incorporated into the entity's final audit report. According to OCR officials, OCR has completed this phase of the audit program and will release a final report in 2018.

Annual Report to Congress

The HITECH Act directs HHS to submit an annual report to Congress on compliance with HIPAA that includes details about complaints of alleged violations of the Privacy Rule and the resolution of these complaints. The patient right of access is part of the HIPAA and Privacy Rule requirements. The report, which is issued by OCR, includes information on the patient access complaints OCR has received, the number of investigations it has conducted, and the fines OCR has levied. OCR issued its most recent report in 2016. The report summarized complaints and enforcement actions for the 2013 through 2014 calendar years. OCR officials stated that they are in the process of reviewing a draft report that will be released in mid-2018 and contain information and data from calendar years 2015 and 2016.

Provider and Patient Education Efforts

As part of its responsibilities to enforce HIPAA's Privacy Rule, OCR also provides a variety of educational materials that aim to educate both patients and providers about patients' right to access their medical records. These materials include the following:

In September 2017, OCR published a pamphlet that aims to educate consumers, particularly caregivers, about patients' rights to access their medical records, including how to file a complaint if denied access.

OCR has worked with ONC to produce three videos ("Your Health Information, Your Rights!") and an infographic aimed at educating patients and others about patients' rights to access their medical records.

OCR has developed provider education videos that aim to educate providers on the rights of patients to access their medical records and how such access can enable patients to be more involved in their own care. Providers can receive continuing education credits for watching these videos.

To assist providers, OCR has worked with ONC to develop a model notice of privacy practices to help providers adequately communicate access rights to patients in a standardized, easy-to-understand way.

Agency Comments

We provided a draft of this report to HHS for review. HHS provided us with technical comments, which we incorporated as appropriate.

We are sending copies of this report to the appropriate congressional committees, the Secretary of Health and Human Services, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or at yocomc@gao.gov. Contact points for our Office of Congressional Relations and Office of Public Affairs can be found on the last page of this report. Other major contributors to this report are listed in appendix I.

Appendix I: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Tom Conahan, Assistant Director; Andrea E. Richardson, Analyst-in-Charge; Krister Friday; and Monica Perez-Nelson made key contributions to this report.
Why GAO Did This Study

HIPAA and its implementing regulations, as amended by the Health Information Technology for Economic and Clinical Health Act, require health care providers to give patients, upon request, access to their medical records, which contain protected health information (i.e., diagnoses, billing information, medications, and test results). This right of access allows patients to obtain their records or have them forwarded to a person or entity of their choice—such as another provider—in a timely manner while being charged a reasonable, cost-based fee. Third parties, such as a lawyer or someone processing disability claims, may also request copies of a patient's medical records with permission from the patient.

The 21st Century Cures Act included a provision for GAO to study patient access to medical records. Among other things, this report describes (1) what is known about the fees for accessing patients' medical records and (2) challenges identified by patients and providers when patients request access to their medical records. GAO reviewed selected HIPAA requirements and implementing regulations and guidance, and relevant laws in four states selected in part because they established a range of fees associated with obtaining copies of medical records. GAO also interviewed four provider associations, seven vendors that work for providers, six patient advocates, state officials, and Department of Health and Human Services' (HHS) officials. The information GAO obtained and its analysis of laws in the selected states are not generalizable. HHS provided technical comments on this report.

What GAO Found

Available information suggests that the fees charged for accessing medical records can vary depending on the type of request and the state in which the request is made. Under the Health Insurance Portability and Accountability Act of 1996 (HIPAA) and its implementing regulations, providers are authorized to charge a reasonable, cost-based fee when patients request copies of their medical records or request that their records be forwarded to another provider or entity. In the case of third-party requests, when a patient gives permission for another entity—for example, an attorney—to request copies of the patient's medical records, the fees are not subject to the reasonable cost-based standard and are generally governed by state law. According to stakeholders GAO interviewed, the fees for third-party requests are generally higher than the fees charged to patients and can vary significantly across states.

The four states GAO reviewed have state laws that vary in terms of the fees allowed for patient and third-party requests for medical records. For example, three of the states have per-page fee amounts for patient and third-party records requests. The amounts charged are based on the number of pages requested and vary across the three states. One of the three states has established a different per-page fee amount for third-party requests. The other two do not authorize a different fee for patient and third-party requests. One of the three states also specifies a maximum allowable fee if the provider uses an electronic health records system. The other two do not differentiate costs for electronic or paper records. In the fourth state, state law entitles individuals to one free copy of their medical record. The statute allows a charge of up to $1 per page for additional copies.
Patient advocates, provider associations, and other stakeholders GAO interviewed identified challenges that patients and providers face when patients request access to their medical records. Patients' challenges include incurring what they believe to be high fees when requesting medical records—for example, when facing severe medical issues that have generated a high number of medical records. Additionally, not all patients are aware that they have a right to challenge providers who deny them access to their medical records.

Providers' challenges include the costs of responding to patient requests for records due to the allocation of staff time and other resources. In addition, according to provider associations and others GAO interviewed, fulfilling requests for medical records has become more complex and challenging for providers, in part because providers may store this information in multiple electronic record systems or in a mix of paper and electronic records.
Background

FEMA's Public Assistance Grant Program

Major disaster declarations can trigger a variety of federal response and recovery programs for government and nongovernmental entities, households, and individuals. FEMA's Office of Response and Recovery manages the PA grant program, providing funds to states, territorial governments, local government agencies, Indian tribes, authorized tribal organizations, and certain private nonprofit organizations in response to presidentially declared disasters to repair damaged public infrastructure such as roads, schools, and bridges. Figure 1 shows the total amount of PA funds obligated by county from January 2009 through February 2017 for federal disaster declarations.

To implement the PA program, FEMA's staff includes a mix of temporary, reservist, and permanent employees under two authorities, the Stafford Act and Title 5. Reservists make up the largest share of the PA workforce, which consisted of 1,852 employees—1,041 reservists, 634 full-time equivalents, and 177 temporary Cadre of On-Call Response/Recovery Employees—as of June 2017, according to PA officials. Figure 2 summarizes the key characteristics for each type of employee.

After a disaster, FEMA sends PA program staff to the affected area to work with state and local officials to assess the damage prior to a disaster declaration. FEMA officials establish a temporary Joint Field Office (JFO) to house staff who will manage response and recovery functions after a declared disaster (including operations, emergency response and support teams, planning, administration, finance, and logistics). Once the President has declared a disaster, PA staff work with grant applicants to help them document damages, identify eligible costs and work, and prepare requests for PA grant funds by developing project proposals. These proposals may include proposals for hazard mitigation if the hazard mitigation work is related to the repair of damaged facilities, referred to as permanent work projects. Immediate emergency measures, such as debris removal, are not eligible for hazard mitigation. Officials then review and obtain approval of the projects prior to FEMA obligating funds to state grantees. Figure 3 describes the process used to develop, review, and obligate PA projects.

Hazard Mitigation in the PA Program

In addition to rebuilding and restoring infrastructure to its predisaster state, the PA program can be used to fund hazard mitigation measures that will reduce future risk to the infrastructure in conjunction with the repair of disaster-damaged facilities. There is no preset limit to the amount of PA funds a community may receive; however, PA hazard mitigation measures must be determined to be cost effective. Some examples of hazard mitigation measures that FEMA has predetermined to be cost effective, if they meet certain requirements, include

installing shut-off valves on underground pipelines so that damaged sections can be isolated during or following a disaster;

securing a roof using straps, clips, or other anchoring systems in locations subject to high winds; and

installing shutters on windows or replacing glass with impact-resistant material.

Applicants can also propose mitigation measures that are separate from the damaged portions of a facility, such as constructing floodwalls around damaged facilities to avoid future flooding. FEMA evaluates these proposals, considering how the proposed measure protects damaged portions of a facility and whether the measure is reasonable based on the extent of the damage, and determines eligibility on a case-by-case basis.
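FEMA's specific cost-effectiveness criteria are not detailed here, so the following Python sketch is offered only as a hypothetical illustration of one common framing, a benefit-cost ratio of at least 1.0, and not as FEMA's actual benefit-cost methodology; all figures are invented:

```python
# Hypothetical sketch only, not FEMA's actual benefit-cost methodology
# or toolkit: one common way to frame a cost-effectiveness test is a
# benefit-cost ratio (expected avoided future losses divided by the
# measure's cost) of at least 1.0. All figures are invented.
def is_cost_effective(expected_avoided_losses: float, measure_cost: float) -> bool:
    bcr = expected_avoided_losses / measure_cost
    print(f"benefit-cost ratio: {bcr:.2f}")
    return bcr >= 1.0

# Invented floodwall example: $400,000 cost vs. $520,000 in avoided losses.
print(is_cost_effective(expected_avoided_losses=520_000, measure_cost=400_000))
```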
FEMA's Federal Insurance and Mitigation Administration (FIMA) deploys a cadre of mitigation staff to help coordinate and implement hazard mitigation activities during disaster recovery, including PA hazard mitigation. A primary task of these staff is to identify and assess opportunities to incorporate hazard mitigation into PA projects. Generally, if an applicant seeks to incorporate hazard mitigation measures into a PA project, FIMA's hazard mitigation staff develop a hazard mitigation proposal.

Previous Challenges and Recommendations Related to the PA Program

We, the DHS OIG, and others have reported past challenges with FEMA's management of the PA program related to workforce management, information sharing, and hazard mitigation. For example, we reported in 2008 that the PA program had a shortage of experienced and knowledgeable staff, relied on temporary rotating staff, and provided limited training to its workforce, which impaired PA program delivery and delayed recovery efforts after Hurricanes Katrina and Rita. We found that staff turnover, coupled with information sharing challenges, delayed projects when applicants had to provide the same information each time FEMA assigned new staff and that poorly trained staff provided incomplete and inaccurate information during their initial meetings with applicants or made inaccurate eligibility determinations, which also caused processing delays. We recommended that FEMA strengthen continuity among staff involved in administering the PA program by developing protocols to improve information and document sharing among FEMA staff. In response, in 2013 FEMA instituted a PA Consistency Initiative, which included hiring new managers for FEMA regional offices, stakeholder training on PA program administration, and using a newly developed internal website to allow staff to post and share information to address continuity and knowledge sharing concerns during disaster operations. FEMA also developed the Public Assistance Program Delivery Transition Standard Operating Procedure to facilitate the transfer of responsibility for PA program activities in cases of staff turnover during recovery operations. Despite FEMA's efforts to implement our recommendations, the DHS OIG, in 2016, found continuing challenges after Hurricane Sandy with workforce levels, skills, and performance of reservists, who make up the majority of the PA workforce.

Regarding information sharing, in 2008, we also identified difficulties sharing documents among federal, state, and local participants in the PA process and difficulties tracking the status of projects. We recommended that FEMA improve information sharing within the PA process by identifying and disseminating practices that facilitate more effective communication among federal, state, and local entities. In response, FEMA proceeded with the implementation of a grant tracking and management system, called EMMIE, which was used previously in 2007. However, in subsequent years we found weaknesses in how FEMA developed the system, and the DHS OIG found that information sharing problems similar to the ones identified in our 2008 report persisted.
Regarding hazard mitigation, we reported in 2015 that state and local officials experienced challenges in using PA hazard mitigation during the Hurricane Sandy recovery efforts because PA officials did not consistently prioritize hazard mitigation, and in some cases discouraged mitigation projects during the PA grant application process, among other challenges. We recommended that FEMA assess the challenges state and local officials reported, including the extent to which they can be addressed, and implement corrective actions as needed. In response to our recommendation, FEMA developed a corrective action plan that included actions and milestones for reviewing, updating, and implementing PA hazard mitigation policy. FEMA also identified the PA new delivery model as a solution for some of the challenges state and local officials reported. Previously, the DHS OIG also reported that PA program officials did not consistently identify eligible PA hazard mitigation projects and did not prioritize the identification of PA hazard mitigation opportunities at the onset of recovery efforts after the 2005 Gulf Coast hurricanes. See appendix I for a summary of findings and the status of our past recommendations on challenges with workforce management, information sharing, and hazard mitigation related to the PA program since our last review in December 2008. FEMA’s own internal reviews and outreach efforts have also identified similar challenges. For example, at FEMA’s request, the Homeland Security Studies and Analysis Institute assessed the effectiveness and efficiency of the PA program in 2011. The institute’s report outlined 3 key findings and 23 recommendations relating to the PA preaward process. For example, the report found that FEMA could enhance training programs to develop a skilled and experienced workforce; use technology and web-based tools to support centralized processing, transparency, and efficient decision making; and identify and address potential special considerations, such as hazard mitigation proposals, as early as possible in the preaward process to improve consistency. In 2014, PA program officials analyzed the PA grant process and used input from agency staff and officials involved in various aspects of the program to identify potential improvements. The resulting Public Assistance Program Realignment report found that challenges in workforce management, information sharing, and hazard mitigation continued, and included recommendations for improvement. For example, the report concluded that a shortage of qualified staff, high turnover, unclear organizational responsibilities, and inconsistent training were long-standing and continuing challenges that impaired the PA preaward process. In addition, from January 2015 to April 2015, FEMA conducted extensive outreach with more than 260 stakeholders across FEMA headquarters, all 10 regions, 43 states, and 4 tribal nations to discuss challenges in the PA program and opportunities for improvement. For example, stakeholders identified challenges with ineffective information collection during the preaward process and suggested identifying special considerations, such as hazard mitigation, earlier in the PA process as an idea for improvement. In response, FEMA began redesigning the PA preaward process to operationalize the results of its 2014 report and address areas for improvement identified through its outreach efforts.
The PA New Delivery Model In 2015, FEMA awarded a contract for program support to help PA officials implement a redesigned PA program. This included a new process to develop and review grant applications and obligate PA funds to states affected by disasters; new positions, such as the program delivery manager, who serves as the single point of contact throughout the grant application process; a new Consolidated Resource Center (CRC) to support field operations by supplementing project development, validation, and review of proposed PA project applications; and a new information system to maintain and share PA grant application documents. As part of the new process, PA program officials identified the need to ensure that staff emphasize special considerations, such as hazard mitigation, earlier in the process. Taken together, these efforts represent FEMA’s “new delivery model” for awarding PA program grants. Enhancements in the PA program under the new delivery model are presented in figure 4. Regarding the new delivery model process, FEMA introduced several changes to enhance outreach to applicants during the “exploratory call”—the first contact between FEMA and local officials—and during the first in-person meeting, called the “recovery scoping meeting.” FEMA also revised decision points during the process, when program officials can request more information from applicants and applicants can review and approve the completion of project development steps. FEMA also incorporated special considerations, such as hazard mitigation, earlier in the new process during the exploratory calls and recovery scoping meetings. The changes and enhancements to the PA grant award process in the new delivery model are presented in figure 5. The new process divides proposed PA projects, based on complexity and type of work, into three categories—100 percent completed, standard, and specialized—that PA staff use to expedite reviews or to assign appropriately skilled staff to technical projects as needed. If the applicant has already completed work following a disaster, such as debris removal, the project is considered “100 percent completed,” and JFO staff collect the necessary documents and provide the information to the CRC staff, who complete the development of project applications, validate the information, and complete all necessary reviews. Projects that require repairs and further assistance from PA program staff at the JFO, categorized as “standard” or “specialized,” require a site inspection to document damages before JFO staff provide the information to the CRC. Further, PA program officials assign PA staff based on their skills and experience: standard projects are less technically complex to develop, while specialized projects are more technically complex and costly. We discuss the new workforce positions FEMA developed for JFOs and CRCs, the new information system FEMA developed to maintain and share PA grant documents with applicants, and FEMA’s efforts to incorporate hazard mitigation into PA projects later in this report. Testing the New Delivery Model Prior to Full Implementation Since 2015, FEMA has invested almost $9 million to redesign the PA program through the reengineering and implementation of the new delivery model, including about $4.7 million for contract support for implementation and $4 million for acquisition of the new information system.
FEMA tested the new delivery model in a series of selected disasters, using a continuous process improvement approach to assess and improve the process, workforce changes, and information system requirements prior to implementing the new model for all future disasters. For example, FEMA first tested the new process in Iowa in July 2015 and, in February 2016, PA program officials expanded their test to include all of the new staff positions. In October 2016, PA program officials added the new information system to achieve a comprehensive implementation of all of the elements of the new delivery model for the agency’s response to Hurricane Matthew in Georgia; for two additional disasters in Georgia in January 2017; and for disasters in Missouri, North Dakota, Wyoming, Vermont, and two in New Hampshire from June through August 2017. The timeline for PA’s implementation of the new delivery model is shown in figure 6. According to program officials, FEMA planned to implement the new model for all future disasters beginning in January 2018. However, historic disaster activity during the 2017 hurricane season accelerated full implementation. As a result, on September 12, 2017, FEMA officials announced that, unless officials determined it would be infeasible in an individual disaster, the program would use the new delivery model in all future disasters. FEMA Designed the New PA Delivery Model to Address Workforce Management Challenges, but Efforts to Support Full Implementation Could Be Enhanced PA’s New Delivery Model Was Designed to Respond to Previously Identified Workforce Challenges According to FEMA’s 2014 PA Program Realignment report and other program documents, PA officials designed the new delivery model to respond to persistent workforce management challenges, including identifying the required number of staff and the needed skills and training, and to improve the efficiency and effectiveness of the PA preaward process. To address these challenges, PA program officials centralized much of the responsibility for processing PA projects in the CRCs, created additional new positions with specialized roles and responsibilities in JFOs, and established training and mentoring programs to help build new staff members’ skills. Centralized Roles at CRCs In 2016, PA program officials centralized at FEMA’s first CRC in Denton, Texas, some of the project activities that had previously been carried out at individual JFOs. Officials did so by establishing 18 new positions, many of which corresponded directly to positions that FEMA deployed to individual JFOs in the legacy PA delivery model. According to PA officials, centralizing positions will improve standardization in project processing and result in a higher-quality work product. As part of the new delivery model, PA program officials created a new hazard mitigation liaison position for PA program staff at the CRC that did not previously exist at individual JFOs. The other new positions that PA program officials either created or centralized at the CRC included two specialized positions responsible for costing and validating PA projects. Previously, the PA project specialist deployed to the JFO would complete these tasks and others; however, project development practices varied across regions and disasters. In contrast, CRC staff are full-time employees who receive training to specialize in completing standardized project development steps for PA projects from multiple disasters on an ongoing basis.
Program officials anticipate that centralizing new specialized staff at the CRCs will also reduce PA administrative costs and staffing levels at the JFOs. For example, staff at the CRCs, such as the new hazard mitigation liaisons and insurance and costing specialists, could support project development for multiple disasters and regions simultaneously, whereas PA previously needed to deploy staff to each JFO to fulfill these roles. In addition, once JFOs operating under the new model send projects to the CRCs for processing and review, FEMA can more rapidly close its JFOs, reducing associated administrative costs. For example, following Hurricane Matthew, FEMA credited the new delivery model, in part, with its ability to close the JFO in Georgia sooner than several other JFOs in neighboring states not involved in the implementation of the new delivery model. Specialized Roles at JFOs PA program officials created new positions with more specialized roles and responsibilities to help PA staff at JFOs provide more consistency in the project development process and better guidance to applicants. Program officials split the broad responsibilities previously managed at the JFOs by PA crew leaders and project specialists into two new, specialized positions—the program delivery manager and the site inspector. The program delivery manager serves as the applicant’s single point of contact throughout the preaward process, manages communication with the applicant, and oversees document collection. All three PA grant applicants we spoke with following Hurricane Matthew in Georgia greatly appreciated the knowledge and assistance provided by their program delivery managers. Site inspectors are responsible for conducting the site inspection to document all disaster-related damages, determining the applicant’s plans for recovery, coordinating with other specialists, and verifying the information collected with the applicant. Officials expect that deployed staff at JFOs can complete fieldwork faster and provide greater continuity of service to applicants. Further, program officials believe that specializing roles will enable them to provide more targeted training and improve employee satisfaction. New Training Courses and Mentoring [Photo caption: Site inspection, hazard mitigation, and environmental and historic preservation specialists, along with a new Public Assistance program mentor, conduct a site inspection with the applicant to document damages to a historic cemetery in Savannah, Georgia, following Hurricane Matthew in 2016.] PA program officials designed new training and mentoring programs for the new positions at the CRCs and JFOs and used a continuous feedback process to update and improve the training, position guides, and task books throughout the implementation of the new delivery model, according to PA officials. According to a June 2017 update of the PA Cadre Training Plan, training for the new model has five major focus areas: required training and skills for position qualification; on-site refresher training; mentor training; regional-based state, local, tribal, and territorial training; and training on the new information system. Specifically, officials developed six new training courses and identified which are required for each position under the new delivery model. For example, a program delivery manager at the JFO is required to complete both the program delivery manager and site inspector specialist courses.
As of June 2017, PA program officials had provided at least one new model training course to 93 percent of their cadre (including program delivery manager training to 366 individuals and site inspector training to 1,172 individuals) and planned to provide 28 additional courses to the PA cadre through September 2017. According to regional and CRC officials, the training courses and mentoring from experienced staff helped maximize new staff’s capabilities in the new process. PA Officials Planned Additional Training to Address Issues Identified during Implementation Throughout the third implementation of the new delivery model, JFO and CRC staff, as well as regional PA staff, stakeholders, and applicants, identified staff skills and training as a key area needing more attention before full implementation of the new delivery model. Our work and FEMA’s after-action reports from the third test in Georgia identified problems with site inspector skills, which affected the timeliness and accuracy of projects. Specialists and managers at the CRC noted that poorly trained site inspectors did not consistently provide the necessary information from the field, which delayed CRC staff’s processing of projects. According to a PA applicant in Georgia, gaps in the skills and experience of their site inspector resulted in the need to conduct a “do-over” site inspection on one of the applicant’s projects, causing delays. PA staff and state officials attribute much of the site inspectors’ skill gaps to their lack of training and experience in the program. According to PA Region officials, providing timely training will be a resource-intensive challenge for implementing the new delivery model for all future disasters. For example, it can be difficult to train reservists before FEMA deploys them to disasters, and many of the program’s experienced reservists have retired or resigned, resulting in few mentors for the program and a high need to provide training to inexperienced and newly hired staff. PA officials and stakeholders also emphasized the need for FEMA to provide additional training for state and local officials to build capacity and support the goals of the new delivery model. For example, according to JFO officials at the third implementation, the new delivery model increases responsibilities for applicants, who will require more training than FEMA currently provides. According to state officials, applicant capabilities vary, and FEMA should provide training to state and local officials on the new delivery model and the information system before a disaster. Skill gaps among applicants could result in inconsistent implementation of the new process, according to PA staff and stakeholders, and PA staff said that training was important to prevent applicants from reverting to the legacy PA grant application process. To support full implementation of the new delivery model for all disasters, PA program officials have updated training courses for PA staff and applicants and planned additional training to address these challenges and other lessons learned through the test implementations. For example, PA officials told us they updated the site inspector training program in May 2017 and scheduled a new site inspector training session for August 2017 that includes more hands-on instruction to help address the skill gaps identified for site inspectors.
PA officials created a new training course for FEMA’s regional offices, in part to enable regional PA staff to provide new delivery model training to state and local officials. PA officials also planned to develop a self-paced, online course for state and local officials by the end of 2017. Opportunities Exist to Enhance Workforce Assessment in the New Delivery Model PA officials have not fully assessed the workforce needed for JFO field operations, CRC staff, or FIMA’s hazard mitigation staff to support implementation of the new delivery model for all future disasters. In 2016, PA program officials developed an initial assessment of the total number of staff needed in the field and at the CRCs to estimate cost savings associated with consolidating and specializing positions at the CRCs and deploying fewer staff to JFOs. However, the assessment did not identify the number of staff required to fill specific positions, including program delivery managers and hazard mitigation specialists, needed to support the new delivery model for full implementation. In reviewing the test implementations of the new delivery model, we found that inadequate staffing levels at the JFOs and CRCs, and among FIMA’s hazard mitigation staff, affected staff’s ability to achieve the goals of the new delivery model. Staff levels at the JFO. We identified challenges in having the right number of program delivery managers and site inspection specialists to achieve program goals for customer satisfaction, efficiency, and quality in test implementations of the new delivery model. For example, in the second test implementation of the new delivery model in Oregon in 2016, PA did not deploy enough program delivery managers to the disaster, which resulted in unmanageable caseloads for program delivery managers, according to state and PA officials. PA program officials assigned program delivery managers an average caseload of 12 PA applicants, which was more than they could effectively manage, according to PA staff; program officials aim for a caseload of 8 to 10 applicants. According to state officials, local officials reported they did not always receive the support they needed from program delivery managers, who managed caseloads consisting of dozens of projects at multiple sites for each applicant during the Oregon implementation. Because program delivery managers were overwhelmed, local officials had difficulty understanding their responsibilities, such as recognizing when they needed to provide information for project development to proceed, according to state officials. PA staff involved with the third test implementation in Georgia in 2016 and 2017 said there were not enough site inspectors or program delivery managers to fully manage the workload at the JFO. Because of the specialization of roles, projects could not move forward when there were not enough staff to execute the next step in the process. For example, PA staff at the JFO said program delivery managers completed recovery scoping meetings rapidly but faced a bottleneck in scheduling site inspections because more applicants awaited site inspections than the available site inspection specialists could accommodate. Staff levels at the CRC. Staff at the CRC reported challenges with staffing levels during the Oregon and Georgia test implementations and expressed concerns about when PA officials will staff the CRCs to support full implementation of the new model for all disasters.
During the Oregon test implementation, a CRC specialist said there were not enough technical specialists to manage the workload; as a result, PA program officials had to redeploy site inspectors from their JFO field operations to the CRC to complete costing estimates. During the third test in Georgia, quality assurance specialists said their workload added stress as they tried to complete work on time while adhering to quality standards. According to CRC specialists in Denton, Texas, PA officials had not determined the staff levels required for full implementation; the specialists agreed that the workload was too high and that program officials needed to determine the appropriate staff levels for each CRC. PA officials said they were still evaluating CRC processing times and workload management from the Oregon and Georgia test implementations to determine staffing needs. Further, PA program officials plan to establish a second CRC in Winchester, Virginia, before the end of 2017, but have not determined the number of additional permanent full-time staff needed to support the CRCs for full implementation of the new delivery model. Staff levels for the hazard mitigation specialists. PA officials have not identified the number of hazard mitigation specialists in FIMA’s hazard mitigation cadre needed for full implementation of the new delivery model. According to JFO staff, current hazard mitigation staff levels are insufficient to provide the desired in-person participation of hazard mitigation staff in all recovery scoping meetings to share information on hazard mitigation with applicants and help them identify potential mitigation opportunities. A PA program official said officials missed opportunities to pursue hazard mitigation during the test implementation after Hurricane Matthew in Georgia due to a lack of hazard mitigation specialists. In addition, for the test implementation in Oregon, there were not enough hazard mitigation specialists to cover all site inspections and implement their new delivery model responsibilities, according to FEMA’s after-action reports. The absence of hazard mitigation specialists in the early stages of PA project development may delay officials’ identification of hazard mitigation opportunities, according to a FIMA official. PA program officials said they did not work with FIMA to determine the appropriate levels of hazard mitigation staff under the new delivery model because they were refining the new process, but as of June 2017 they were working with FIMA to do so. One of the key implementation activities in our Business Process Reengineering Assessment Guide is addressing workforce management issues. Specifically, this includes identifying how many and which employees will be affected by the position changes and retraining. Further, our prior work has found that high-performing organizations identify their current and future workforce needs—including the appropriate number and deployment of staff across the organization—and address workforce gaps to improve the contribution of critical skills and competencies needed for mission success. According to a PA program official, the initial workforce assessment was not comprehensive because officials were still collecting the data required to make informed decisions.
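At its simplest, the kind of workforce assessment described above compares the staff required for each position against the staff on board. The sketch below is a minimal illustration: the position names come from the new delivery model, and the 8-to-10 applicant caseload target for program delivery managers is noted earlier in this report, but every staffing count here is a hypothetical assumption.

```python
import math

# Illustrative staffing-gap assessment for the new delivery model.
# Position names come from the report; all counts are hypothetical.

TARGET_CASELOAD = 10  # program officials aim for 8 to 10 applicants per manager

def required_program_delivery_managers(applicants: int) -> int:
    """Managers needed so that no caseload exceeds the target."""
    return math.ceil(applicants / TARGET_CASELOAD)

on_board = {"program delivery manager": 20, "site inspector": 35,
            "hazard mitigation specialist": 6}
required = {"program delivery manager": required_program_delivery_managers(310),
            "site inspector": 48, "hazard mitigation specialist": 12}

for position, need in required.items():
    gap = need - on_board[position]
    print(f"{position}: need {need}, have {on_board[position]}, gap {gap}")
```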
PA officials agreed that updating their workforce assessments prior to full implementation could be helpful, and acknowledged that program officials needed to be more proactive in applying lessons learned as they pivot from testing to full implementation of the new delivery model in 2018. FEMA also conducts a standard agency-wide workforce structure review every 2 to 3 years, which helps officials determine the appropriate disaster workforce levels. As of June 2017, PA officials were working with other offices within FEMA to expedite the agency-wide assessment of the PA and FIMA hazard mitigation cadres, but did not know when they would complete the assessment. PA officials also acknowledged that they faced an aggressive schedule to complete various planned activities for workforce management, training, and other efforts in support of full implementation, and that they may not be able to complete all efforts as thoroughly as they would like in order to expedite the transition of the PA program to the new delivery model. The gaps in workforce assessment for the JFOs, the CRCs, and FIMA’s hazard mitigation cadre present a risk that PA program managers will not have a sufficient workforce to support the goals of the new delivery model. In addition, the hiring and training activities for new PA program staff could take several months, and program officials will need to know what staff levels are necessary for full implementation of the new delivery model to inform resource decisions for the program in coordination with other agency offices. According to PA program officials, workforce assessment efforts have been delayed as a result of disaster response and recovery efforts related to Hurricanes Harvey, Irma, and Maria. Completing a workforce assessment will help program officials identify gaps in their workforce and skills, which could help them minimize the effects of long-standing staffing and training challenges on PA program delivery and inform full implementation for all disasters. FEMA Designed the New PA Information System to Resolve Past Challenges, but Opportunities Exist to Fully Implement Key Management Controls FEMA’s New PA Information System Is Designed to Resolve Long-Standing Information Sharing Challenges FEMA has long relied on its existing grants management system, EMMIE, but according to PA program officials the system has limited capabilities for managing the PA preaward process and tracking project status and costs. For example, EMMIE does not collect information on all of the preaward activities that are part of the PA grant application process. As a result, PA program officials said they, and applicants, must use ad hoc reports and personal tracking documents to manage and monitor the progress of grant applications. PA officials added that EMMIE is not user-friendly and applicants often struggle to access the system. In response to these ongoing challenges, PA program officials developed FAC-Trax—a separate information system from EMMIE—with new capabilities designed to improve transparency, efficiency, and management of the PA program. Specifically, FAC-Trax allows FEMA staff (through the PA Grants Manager) and applicants (through the PA Grants Portal) to review, manage, and track current PA project status and documentation. For example, applicants can use FAC-Trax to submit requests for public assistance, upload required project documentation, approve grant application items, and send and receive notifications on grant progress and activities.
In addition, the FAC-Trax system includes standardized forms, as well as required fields and tasks that PA program staff and applicants must complete before moving on to the next steps in the PA preaward process. According to PA officials, these capabilities increase transparency, encourage greater applicant involvement, and enhance collaboration and communication between FEMA and grant applicants, to improve efficiency in processing and awarding grant applications and enhance the quality of project development. Further, PA officials said that FAC-Trax could reduce challenges associated with staff turnover during the project development process because the system stores and maintains applicant information and project documentation, making it easier for transitioning staff to assist an applicant. They also said they use FAC-Trax to gather and analyze data that support management of the PA process, including measuring the timeliness of the grant application process. For example, during the test implementation of the new delivery model in Georgia following Hurricane Matthew, officials were able to document that, on average, program delivery managers took 5 days to conduct the exploratory call and 14 days to hold the recovery scoping meeting with applicants, and CRC officials took 33 days to develop and review grant proposals. Managers use these data to assess staffing needs and identify bottlenecks in the PA process, according to PA officials. Opportunities Exist to More Fully Implement Two of Four Key IT Management Controls for FEMA’s New PA Information System FAC-Trax is critical to the new PA delivery model and will be a primary means of sharing grant application documents, tracking ongoing PA projects, and ensuring that FEMA staff and applicants follow PA grant policies and procedures. Given the importance of developing and testing this new information sharing system, we evaluated its development against four key IT management controls: (1) project planning, (2) risk management, (3) requirements development, and (4) systems testing and integration. When implemented effectively, these controls provide assurance that IT systems will be delivered within cost and schedule and will meet the capabilities needed by their users. We found that FEMA’s development of FAC-Trax fully satisfied best practices for project planning and risk management, but additional steps are needed to fully satisfy the areas of requirements development and systems testing and integration, as discussed below. See appendix II for the full assessment of each IT management control. Project Planning PA program officials fully satisfied all five practices in the project planning control area, according to our assessment. Key project planning practices are (1) establishing and maintaining the program’s acquisition strategy, (2) developing and maintaining the overall project plan and obtaining commitment from relevant stakeholders, (3) developing and maintaining the program’s cost estimate, (4) establishing and maintaining the program’s schedule estimate, and (5) identifying the necessary knowledge and skills needed to carry out the program. To address the first and second practices, program officials established detailed plans that describe the acquisition strategy and objectives, the program’s scope, and its framework for using an Agile software development approach, among other key actions. Agile is a method of software development that uses an iterative process and continually improves software based on user needs and feedback.
Program officials also developed a plan detailing the program’s approach to deploying and maintaining FAC-Trax and established stakeholder groups and an integrated product team to support and oversee the development of FAC-Trax. To address the third and fourth practices, they developed and maintained a master schedule of all implementation tasks and milestones through project completion and developed a life-cycle cost estimate of over $19 million. Additionally, FAC-Trax’s acquisition performance baseline describes the system’s minimum acceptable and desired baselines for performance, schedule, and cost. Finally, regarding the fifth practice, program officials identified the knowledge and skills needed to carry out the program in the FAC-Trax Request for Proposal and FAC-Trax Capability Development Plan. Risk Management PA program officials fully satisfied all four practices in the risk management control area, according to our assessment. Key risk management practices are (1) identifying risks, threats, and vulnerabilities that could negatively affect work efforts; (2) evaluating and categorizing each identified risk using defined risk categories and parameters; (3) developing risk mitigation plans for selected risks; and (4) monitoring the status of each risk periodically and implementing the risk mitigation plan as appropriate. To address the first and second practices, program officials identified key risks that could negatively affect FAC-Trax in a “risk register”—an online site used to track risks, issues, and mitigating actions. As of May 2017, officials had identified 13 risks in the risk register—four open and nine closed—and had evaluated and categorized the identified risks based on the probability of occurrence and the scope, schedule, and cost impacts. For example, program officials reported that two of the program’s open risks have a “medium” risk rating—meaning the risk has the potential to slightly affect project cost, schedule, or performance. To address the third and fourth practices, program officials developed and documented risk mitigation plans for all identified risks. For example, program officials planned to mitigate the risk of limited engagement of subject matter experts by identifying and engaging with appropriate experts through workshops, and monitoring the capability development process to identify any issues that may cause project delays. In addition, PA program officials documented the responsible officials, reevaluation date, and risk status, among other things, for each risk in the register, and reviewed and updated risks during weekly and monthly program reviews with stakeholders throughout FEMA. Requirements Development PA program officials fully satisfied four of five practices in the requirements development control area, according to our assessment. Key requirements development practices are (1) eliciting stakeholder needs, expectations, and constraints, and transforming them into prioritized customer requirements; (2) developing and reviewing operational concepts and scenarios to refine and discover requirements; (3) analyzing requirements to ensure that they are complete, feasible, and verifiable; (4) analyzing requirements to balance stakeholder needs and constraints; and (5) testing and validating the system as it is being developed.
To address the first and second practices, program officials developed a requirements management plan outlining how officials capture, assess, and plan for FAC-Trax enhancements, and established a change control process to review, prioritize, and verify user requests for changes to the system and user feedback. As of May 2017, the PA program office had received 734 change requests related to FAC-Trax; program officials had completed 420 of the requested changes and planned to address an additional 277. Program officials also developed a functional requirements document outlining the high-level requirements for FAC-Trax and detailed operational concepts and scenarios for each phase of the preaward process in the system’s concept of operations. To address the fourth practice, program officials created a standard template to analyze and document the user needs and acceptance criteria for planned system capabilities in March 2017. In addition, PA program officials identified risks and dependencies for recommended changes to FAC-Trax, and balanced the cost and priority of system enhancements as part of the change control process. Finally, regarding the fifth practice, program officials tested and evaluated FAC-Trax during development, which included validating system enhancements through user acceptance testing. However, program officials did not fully address the third practice—analyzing requirements to ensure they are complete, feasible, and verifiable—because they did not ensure detailed user requirements were necessary and sufficient by tracking them back to higher-level requirements. For example, although program officials reviewed change requests for completeness and followed up with users to verify requirements, officials did not track system enhancements made in response to detailed user requirements (e.g., allowing users to search PA projects by project number) back to the high-level requirements (e.g., storing data and information provided by the applicant) identified in the FAC-Trax functional requirements document and performance work statement. Officials did not track system enhancements back to high-level requirements because they did not have a complete understanding of basic user needs and system requirements at the beginning of the FAC-Trax effort, according to the PA program manager. A PA official also said the change control process was a way to identify the basic capabilities FAC-Trax needed to have, and that tracking enhancements back to high-level requirements could have made the change control process more difficult to manage and reduced user participation if, for example, users needed to understand how their change requests related to high-level requirements. However, program officials could have tracked enhancements back to high-level requirements themselves using the change control process without putting any additional burden on users. Even without a complete understanding of user needs and system requirements at the beginning of the FAC-Trax effort, analyzing whether users’ change requests satisfied the higher-level requirements identified in key design and planning documents would have given officials a basis for more detailed and precise requirements throughout project development and helped them better manage the project, according to IT management controls.
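One lightweight way to implement such tracking is a traceability matrix that maps each detailed change request to a parent high-level requirement, so that requests without a parent surface immediately for analysis. The sketch below is illustrative; the requirement and change request identifiers are hypothetical and are not drawn from FAC-Trax documentation, though the two example requirements echo those cited above.

```python
# Illustrative requirements traceability check: every detailed change request
# should trace back to a documented high-level requirement. All identifiers
# are hypothetical, not actual FAC-Trax requirements.

high_level_requirements = {
    "HLR-01": "Store data and information provided by the applicant",
    "HLR-02": "Track current PA project status and documentation",
}

change_requests = [
    {"id": "CR-0412", "summary": "Search PA projects by project number", "traces_to": "HLR-01"},
    {"id": "CR-0519", "summary": "Notify applicant when project moves to review", "traces_to": "HLR-02"},
    {"id": "CR-0533", "summary": "Export project list to spreadsheet", "traces_to": None},
]

for cr in change_requests:
    parent = cr["traces_to"]
    if parent in high_level_requirements:
        print(f'{cr["id"]} -> {parent}: {high_level_requirements[parent]}')
    else:
        print(f'{cr["id"]} has no parent requirement -- flag for analysis')
```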
Further, according to the PMBOK® Guide, tracking or measuring system capabilities against approved requirements is a key process for managing a project’s scope, measuring project completion, and ensuring the project meets user needs and expectations. Program officials acknowledged the importance of tracking system enhancements back to documented system requirements. Ensuring that FAC-Trax meets user needs and expectations is especially important because the information system is key to the success of the new delivery model, according to PA officials. By analyzing progress made on documented, high-level requirements, a step that reflects a key IT management control for requirements development, the PA program will have greater assurance that FAC-Trax will provide functionality that meets user needs and expectations. Systems Testing and Integration PA program officials did not fully satisfy either of the two practices in the systems testing and integration control area, according to our assessment. Key systems testing and integration practices are (1) developing test plans and test cases, which include a description of the overall approach for system testing, the set of tasks necessary to prepare for and perform testing, the roles and responsibilities of individuals or groups responsible for testing, and criteria to determine whether the system has passed or failed testing; and (2) developing a systems integration plan to identify all systems to be integrated, describe how integration problems are to be documented and resolved, define the roles and responsibilities of all relevant participants, and establish a sequence and schedule for every integration step. Regarding the first practice, PA program officials and the FAC-Trax contractor established a test plan that identifies the method and strategy for performing testing, including the necessary tasks (such as responding to user feedback and testing errors and incorporating resolutions into future work), testing parameters, and the roles and responsibilities of the individuals responsible for testing. However, program officials have not developed system testing criteria to evaluate FAC-Trax, which would align with the practice described above of using criteria to determine whether the system has passed or failed testing. A key feature of Agile software development is the “definition of done”—a set of clear, comprehensive, and objective criteria that the government should use to evaluate software after each iteration of development. PA program officials said they did not establish a definition of done because officials initially managing the FAC-Trax effort lacked familiarity with system development in the Agile environment. Officials acknowledged the importance of establishing a definition of done and said they are planning to develop one, but they have not identified how or when to incorporate it into the development process. According to the TechFAR—the government’s handbook for procuring digital services using Agile processes—the government and vendor should establish this definition after contract award at the beginning of each cycle of software development. By establishing criteria, such as a definition of done, to evaluate the system—a step that reflects a key IT management control for system testing and is an effective practice for applying Agile to software development—the PA program will have greater assurance that FAC-Trax is usable and responsive to specified requirements.
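To make the concept concrete, a definition of done is simply a short list of objective checks that every completed software increment must pass before the government accepts it. The criteria in the sketch below are hypothetical examples of the form such checks might take; the actual criteria would need to be agreed on by FEMA and its vendor, as the TechFAR recommends.

```python
# Illustrative "definition of done" for an Agile increment. The criteria
# below are hypothetical examples of the kind of objective checks FEMA and
# its vendor would agree on; they are not drawn from FAC-Trax documentation.

DEFINITION_OF_DONE = [
    "All acceptance criteria in the user story are demonstrated",
    "Automated tests pass with no new high-severity defects",
    "User acceptance testing is signed off by the product owner",
    "Documentation and release notes are updated",
]

def increment_is_done(checks_passed: set[str]) -> bool:
    """An increment is accepted only if every criterion is satisfied."""
    return all(criterion in checks_passed for criterion in DEFINITION_OF_DONE)

passed = set(DEFINITION_OF_DONE[:3])  # release notes not yet updated
print(increment_is_done(passed))      # False -- increment is not done
```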
Regarding the second practice, PA program officials developed a systems integration plan in June 2017 that identified the potential for integration of FAC-Trax with four FEMA systems, including EMMIE. In addition, program officials included a description of how staff should document and resolve integration problems in FAC-Trax development and test plans. However, the systems integration plan does not define the roles and responsibilities of all participants in system integration activities or establish a sequence and schedule for every integration step for the four FEMA systems. PA officials said that system integration planning for FAC-Trax is in the early stages, but they acknowledged the importance of these elements of system integration planning. Officials plan to define the roles and responsibilities of all participants in system integration activities and to develop the sequence and schedule for every integration step as they add new systems to the FAC-Trax development plan and obtain the funding needed for their integration. Nonetheless, FEMA has used FAC-Trax for selected PA disasters since October 2016 and plans to use FAC-Trax for all future disasters. According to IT management controls, agencies should establish the systems integration plan early in the project and revise it to reflect evolving and emerging user needs. By ensuring that the FAC-Trax systems integration plan defines the roles and responsibilities of relevant participants for all integration relationships and establishes a sequence and schedule for every integration step, the PA program will have greater assurance that FAC-Trax functions properly with other systems and meets user needs. FEMA’s New PA Model Enhances Hazard Mitigation Staff Participation, but Opportunities Exist to Further Promote Mitigation Changes under the New Model Include Enhanced Participation of Hazard Mitigation Staff FEMA’s new delivery model enhances the participation of hazard mitigation staff with the goal of identifying opportunities for mitigation earlier in the PA preaward process, according to PA officials. Two key changes related to hazard mitigation under the new model are (1) an emphasis on engaging hazard mitigation specialists at the JFO earlier in the PA process and involving them in specific PA preaward activities and (2) the establishment of the PA program’s hazard mitigation liaison at the CRC. For example, position guides direct program delivery managers to coordinate with FIMA’s hazard mitigation specialists prior to recovery scoping meetings, and direct site inspectors to coordinate with hazard mitigation specialists prior to site inspections, to discuss a PA grant applicant’s damages and any potential mitigation opportunities. PA program officials also developed guidance for conducting the exploratory call and the recovery scoping meeting with applicants, which includes questions for PA staff to ask about the applicant’s interest in or plans for incorporating hazard mitigation into potential projects. In addition, a new hazard mitigation liaison at the CRC is responsible for reviewing PA projects for hazard mitigation opportunities and serving as a mitigation subject matter expert for the PA program. According to data provided by FEMA, PA grant applicants incorporated hazard mitigation into approximately 18 percent of permanent work projects for all disasters nationwide from 2012 to 2015.
During test implementation of the new delivery model, state, PA, and FIMA officials all reported an increase in the number of hazard mitigation activities on PA permanent work projects. For example, state officials who participated in the second new model test in Oregon said that effective communication and coordination between PA and hazard mitigation staff resulted in applicants incorporating hazard mitigation into over 60 percent of permanent work projects. Furthermore, PA officials reported an increase in hazard mitigation during the third test implementation of the new model in Georgia following Hurricane Matthew, where approximately 16 percent of permanent work projects included mitigation as of June 2017. This is an increase over the roughly 3 percent of projects that the PA program estimates incorporated hazard mitigation in previous PA hurricane disasters in Georgia, according to PA officials. While PA officials are trying to increase hazard mitigation through the new delivery model, not all disasters present the same number of opportunities to incorporate hazard mitigation. First, the PA program incorporates hazard mitigation measures only into permanent work projects, such as repairs to roads, bridges, and buildings. For example, as of June 2017, approximately 60 percent of the projects FEMA funded in Georgia for the third test implementation after Hurricane Matthew were for emergency work, which is not eligible for hazard mitigation measures. Second, the PA program funds only mitigation measures that officials determine to be cost-effective. In addition, we have previously reported on other factors that affect whether applicants incorporate hazard mitigation into PA projects, such as their capacity to manage and ability to fund hazard mitigation projects. Opportunities to Better Promote Hazard Mitigation under the New Model Hazard Mitigation Planning and Prioritization National Planning for Hazard Mitigation In our 2015 report on disaster resilience following Hurricane Sandy, we noted that disaster-affected areas have different threats and vulnerabilities, and local stakeholders make the ultimate determination whether or not to incorporate hazard mitigation into a project. Further, without a strategic approach to making disaster resilience investments, the federal government and its nonfederal partners may be unable to fully capitalize on opportunities to mitigate the greatest known threats and hazards. We recommended that the Mitigation Framework Leadership Group develop an investment strategy to help ensure that federal funds expended to enhance disaster resilience reduce, as effectively and efficiently as possible, the nation’s fiscal exposure stemming from climate change and the rising number of federal major disaster declarations. In response, FEMA plans to issue a final National Mitigation Investment Strategy in 2018. The goals of this strategy include increasing the effectiveness of investments in reducing disaster losses and increasing resilience, and improving the coordination of disaster risk management among federal, state, local, tribal, territorial, and private entities. Although the new model establishes hazard mitigation activities for PA and FIMA staff in the preaward process, it does not standardize and prioritize hazard mitigation planning at JFOs in the way FEMA did under prior PA program policy.
Specifically, FEMA’s 2007 PA program policy standardized planning for hazard mitigation across PA recovery efforts by stating that agency and state officials should issue a memorandum of understanding (MOU) early in the disaster outlining how PA hazard mitigation will be addressed for the disaster, including what mitigation measures will be emphasized, applicable codes and standards, and any potential integration with other mitigation grant programs. However, PA program officials did not include guidance that standardizes planning for hazard mitigation, such as encouraging the use of an MOU, in FEMA’s 2010 PA program policy, the most recent update to the Public Assistance Program and Policy Guide in April 2017, or the New Delivery Model Operations Manual. As a result, FIMA officials said, FEMA and state officials do not consistently issue MOUs that outline how FEMA and the state plan to promote PA hazard mitigation during the recovery effort; these officials explained that the use of an MOU depends on the preferences and priorities of the FEMA officials involved. When an MOU is not issued, FIMA hazard mitigation staff and PA officials at the JFO meet to determine the extent to which hazard mitigation staff interact directly with applicants regarding PA hazard mitigation during the recovery process, according to a FIMA official. Having a consistent approach to planning for and prioritizing hazard mitigation across all disasters is important, given that FEMA has experienced challenges in consistently prioritizing and integrating hazard mitigation across PA recovery efforts, according to our prior work and that of others. For example, in our 2015 report on resilience in the Hurricane Sandy recovery, we found that state and local officials experienced challenges maximizing disaster resilience in the recovery effort because PA officials did not consistently prioritize hazard mitigation during project development. According to FEMA’s National Mitigation Framework, planning is vital for mitigation efforts during disaster recovery, and federal, state, and local officials should establish procedures that emphasize a coordinated delivery of mitigation activities and capitalize on opportunities to reduce future disaster losses. Similarly, the Recovery Federal Interagency Operational Plan, which supports FEMA’s National Disaster Recovery Framework, identifies planning as a key task for identifying mitigation opportunities and integrating risk reduction considerations into decisions and investments during the recovery process. FIMA officials agreed that including in operations guidance the development of a formal plan for PA hazard mitigation, such as the MOU called for in the 2007 PA program policy, would help program officials plan for and prioritize hazard mitigation. They noted that FIMA’s hazard mitigation field operations guide includes procedures for implementing proposed MOUs to achieve mitigation goals. PA program officials said that, in light of changes to the PA process under the new model and subsequent updates to program policies, the MOU provision from the 2007 PA program policy was outdated. However, officials agreed that planning for and prioritizing hazard mitigation at the operational level is important and said they were examining additional ways to incorporate these activities early in the PA process.
As FEMA continues to implement the new model, establishing procedures to standardize hazard mitigation planning for each disaster, as it did through prior policy, could improve the prioritization of hazard mitigation in PA recovery efforts and increase the effectiveness of mitigation in reducing disaster losses and increasing resilience. New Delivery Model Performance Objectives and Measures Could Better Align with FEMA’s Strategic Goal for Hazard Mitigation PA program officials developed performance objectives and measures for hazard mitigation in the new delivery model, but they could add measures to better align performance assessment for the PA program with FEMA’s broader strategic goals for hazard mitigation. In its strategic plan for 2014–2018, FEMA established an agency-wide goal to increase, by 5 percentage points by the end of fiscal year 2018, the percentage of FEMA-funded disaster projects, such as those under the PA program, that provide mitigation above local, state, and federal building code requirements. For example, local building codes may require measures for new construction to mitigate against future damage. To align with FEMA’s strategic goal, PA officials would also need to measure the number of PA projects with mitigation measures that bring repaired infrastructure to a level above applicable building codes. However, under the new model, FEMA officials developed performance objectives (and associated measures) to increase the number of projects that include hazard mitigation by 5 percent and to increase the total dollars spent on hazard mitigation by 2 percent. While these measures could help incentivize mitigation, they are not tied to building codes and do not include specific information that FEMA could use to continually assess the PA program’s contributions to meeting the agency’s strategic goal. According to Standards for Internal Control in the Federal Government, agency management should design control activities, such as establishing and reviewing performance measures, to achieve the agency’s objectives. In addition, our work on leading public sector organizations has found that such organizations assess the extent to which their programs and activities contribute to meeting their mission and desired outcomes, and strive to establish clear hierarchies of performance goals and measures. A clear connection between performance measures and program offices helps to both reinforce accountability and ensure that, in their day-to-day activities, managers keep in mind the outcomes their organization is striving to achieve. FEMA’s ability to evaluate and report on PA hazard mitigation data is constrained, but officials are addressing this challenge through the development of data reporting and analytics capabilities for the PA program’s new information system, according to PA officials. PA program officials developed measures they could use to evaluate the new model during test implementation and compare its performance to the legacy PA process, and they agreed that aligning PA program hazard mitigation goals with FEMA’s agency-wide strategic goals would be helpful. As FEMA continues to develop and implement the new model, developing performance measures and objectives that better inform and support the agency’s broader strategic goals could help ensure that FEMA capitalizes on hazard mitigation opportunities in PA recovery efforts.
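The potential misalignment is easiest to see in how the two kinds of measures would be computed from project data. The sketch below contrasts the new model's count-based measure with a measure tied to FEMA's strategic goal of mitigating above building code requirements; the project records and the above-code flag are hypothetical.

```python
# Illustrative contrast between the new model's measure (share of projects
# that include any hazard mitigation) and a measure aligned with FEMA's
# strategic goal (share of projects mitigating above applicable building
# codes). All project records are hypothetical.

projects = [
    {"id": "P-1", "has_mitigation": True,  "above_building_code": True},
    {"id": "P-2", "has_mitigation": True,  "above_building_code": False},
    {"id": "P-3", "has_mitigation": False, "above_building_code": False},
    {"id": "P-4", "has_mitigation": True,  "above_building_code": True},
]

def pct(numerator: int, denominator: int) -> float:
    return 100.0 * numerator / denominator

with_mitigation = sum(p["has_mitigation"] for p in projects)
above_code = sum(p["above_building_code"] for p in projects)

print(f"Projects with any mitigation: {pct(with_mitigation, len(projects)):.0f}%")  # 75%
print(f"Projects mitigating above code: {pct(above_code, len(projects)):.0f}%")     # 50%
# Only the second figure speaks directly to FEMA's strategic goal.
```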
Conclusions FEMA’s Public Assistance grant program is a complicated, multibillion-dollar program that is critical to helping state and local communities rebuild and recover after a major disaster. In recent years, FEMA has undertaken a major reengineering effort to make the PA preaward process simpler and more efficient for applicants and to address challenges encountered during recovery from past disasters. FEMA’s new delivery model represents a significant opportunity to strengthen the PA program and address these past challenges, and growing pains are to be expected when implementing any large reengineering effort. Further, FEMA officials are implementing these changes while supporting response and recovery efforts following disasters, including the catastrophic flooding from Hurricane Harvey in August 2017 and widespread damage from Hurricanes Irma and Maria in September 2017. As such, it is critical that feedback obtained and lessons learned while testing the new model inform decisions and actions as FEMA proceeds with full implementation for all disasters, including the complex recovery efforts in the states and territories affected by Hurricanes Harvey, Irma, and Maria. FEMA has redesigned the PA delivery model to address various challenges related to workforce management, information sharing with state and local grantees, and incorporating hazard mitigation into PA projects. FEMA has developed new workforce processes, training, and positions to address past challenges, but completing a workforce assessment that identifies the number of staff needed will inform workforce management and resource allocation decisions and help FEMA ensure a more successful implementation. This is particularly important as FEMA uses the new model for the long-term recovery from the 2017 hurricanes and faces capacity challenges with a workforce stretched thin. Further, FEMA’s new FAC-Trax information sharing system provides FEMA and state and local applicants and grantees with better capabilities to address past challenges in managing and tracking PA projects. In developing FAC-Trax, FEMA implemented many of the key IT management controls that help ensure new IT systems are implemented effectively. However, additional steps are needed to fully satisfy the areas of requirements development and systems testing and integration. Finally, FEMA has taken some actions to better promote hazard mitigation as part of its new PA model. However, more consistent planning for hazard mitigation following a PA disaster, together with specific performance measures and objectives that better align with and support the agency’s broader strategic goals, could help ensure that mitigation is incorporated into recovery efforts, which presents an opportunity to encourage disaster resilience and reduce federal fiscal exposure from recurring catastrophic natural disasters. Recommendations for Executive Action We are making the following five recommendations to FEMA’s Assistant Administrator for Recovery: The FEMA Assistant Administrator for Recovery should complete a workforce staffing assessment that identifies the appropriate number of staff needed at joint field offices, Consolidated Resource Centers, and in FIMA’s hazard mitigation cadre to implement the new delivery model nationwide.
(Recommendation 1) The FEMA Assistant Administrator for Recovery should establish controls for tracking FAC-Trax capabilities to the system's functional and operational requirements to more fully satisfy requirements development controls and ensure that the new information system provides capabilities that meet users' needs and expectations. (Recommendation 2) The FEMA Assistant Administrator for Recovery should establish system testing criteria, such as a "definition of done," to assess FAC-Trax as it is developed; define the roles and responsibilities of all participants; and develop the sequence and schedule for integration of other systems with FAC-Trax to more fully satisfy systems testing and integration controls. (Recommendation 3) The FEMA Assistant Administrator for Recovery, in coordination with the Associate Administrator of the Federal Insurance and Mitigation Administration, should implement procedures to standardize planning for addressing PA hazard mitigation at the joint field offices, for example, by requiring FEMA and state officials to develop a memorandum of understanding outlining how they will prioritize and address hazard mitigation following a disaster, as FEMA did through prior policy. (Recommendation 4) The FEMA Assistant Administrator for Recovery, in coordination with the Associate Administrator of the Federal Insurance and Mitigation Administration, should develop performance measures and associated objectives for the new delivery model to better align with FEMA's strategic goal for hazard mitigation in the recovery process. (Recommendation 5) Agency Comments and Our Evaluation We provided a draft of this report to DHS and FEMA for review and comment. DHS provided written comments, which are reproduced in appendix III. In its comments, DHS concurred with our recommendations and described actions planned to address them. FEMA also provided technical comments, which we incorporated as appropriate. With regard to our first recommendation, that FEMA complete a workforce staffing assessment that identifies the number of staff needed at joint field offices, Consolidated Resource Centers, and FIMA's hazard mitigation cadre, DHS stated that PA, in coordination with the Field Operations Directorate and FIMA, will continue to refine and evaluate staffing needs and update the cadre force structures under the new delivery model. DHS estimated that this effort would be completed by June 28, 2019. This action, if fully implemented, should address the intent of the recommendation. With regard to our second recommendation, that FEMA establish controls for tracking FAC-Trax capabilities to ensure the new information system meets users' needs, DHS stated that Recovery program managers will update the FAC-Trax Requirements Management Plan and the FAC-Trax Release Plan to ensure the tracking and traceability of FAC-Trax functional and operational requirements. DHS estimated that this effort would be completed by January 31, 2018. This action, if fully implemented, should address the intent of the recommendation.
With regard to our third recommendation, that FEMA establish systems testing criteria to assess the development of FAC-Trax and define the roles and responsibilities and the sequence and schedule for system integration, DHS stated that Recovery program managers will update the FAC-Trax System Integration Plan to include integration with the Deployment Tracking System, Enterprise Data Warehouse, Preliminary Damage Assessment interface, and State Grants Management system interface. DHS estimated that this effort would be completed by June 29, 2018. This action, if fully implemented, should address the intent of the recommendation. With regard to our fourth recommendation, that FEMA implement procedures to standardize planning for addressing PA hazard mitigation at the JFO, DHS stated that PA will update current process documents or develop new documents to better incorporate mitigation into the operational planning phase of the new delivery model. DHS estimated that this effort would be completed by July 31, 2018. This action, if fully implemented, should address the intent of the recommendation. With regard to our fifth recommendation, that PA coordinate with FIMA to develop performance measures and associated objectives for the new delivery model that better align with FEMA's strategic goals for hazard mitigation in the recovery process, DHS stated that PA will reconvene the PA-Mitigation working group to develop and refine PA-related hazard mitigation performance measures. DHS estimated that this effort would be completed by June 29, 2018. This action, if fully implemented, should address the intent of the recommendation. We are sending copies of this report to the Secretary of Homeland Security and interested congressional committees. If you or your staff have any questions about this report, please contact me at (404) 679-1875 or CurrieC@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. Appendix I: Selected Prior Work Related to Federal Emergency Management Agency's (FEMA) Public Assistance (PA) Program Appendix II: Assessment of Information Technology Management Controls for the FEMA Applicant Case Tracker (FAC-Trax) Table 2 shows details on the Federal Emergency Management Agency (FEMA) Public Assistance (PA) program office's implementation of key practices across four information technology (IT) management control areas for its new information system, the FEMA Applicant Case Tracker (FAC-Trax). PA developed FAC-Trax as a web-based project tracking and case management system to supplement the Emergency Management Mission Integrated Environment (EMMIE) and help resolve long-standing information sharing challenges. To determine the extent to which the FAC-Trax program office implemented IT management controls, we reviewed documentation from the FAC-Trax program and compared it to key management best practices, including the Software Engineering Institute's Capability Maturity Model® Integration for Acquisition and Development, the Project Management Institute's Guide to the Project Management Body of Knowledge (PMBOK® Guide), and the Institute of Electrical and Electronics Engineers' Standard for Software and System Test Documentation.
We assessed the program as having fully implemented a practice if the agency provided evidence that it fully addressed the practice; partially implemented if the agency provided evidence that it addressed some, but not all, portions of the practice; and not implemented if the agency did not provide any evidence that it addressed the practice.

Table 2. Public Assistance (PA) Program Office's Implementation of Key Information Technology Management Controls for FAC-Trax

PA program officials developed an acquisition plan for FAC-Trax identifying the capabilities the system is intended to deliver, the acquisition approach, and acquisition objectives. Additionally, program officials developed a capability development plan outlining a strategy for the program to obtain approval to acquire FAC-Trax. Lastly, program officials developed a systems engineering plan describing the program's scope and its framework for using an Agile development approach, as well as a deployment, support, and maintenance plan for FAC-Trax. PA program officials developed an acquisition program baseline detailing FAC-Trax's cost parameters and a life-cycle cost estimate for the system. As of May 2017, the life-cycle cost estimate for FAC-Trax through fiscal year (FY) 2023 is approximately $19.3 million. PA program officials updated the life-cycle cost estimate for FYs 2016 and 2017 after price negotiations with the FAC-Trax contractor, and will continue to update the estimate as annual budgets are approved, according to the Integrated Logistic Support Plan. The contracting officer's representative for FAC-Trax performs a cost review at the end of each month, according to program officials. Furthermore, the contractor's weekly status report includes information on the number of hours worked and the percent of contract value spent. Program officials also review program costs with Office of Response and Recovery, PA, Office of the Chief Information Officer (OCIO), and other program office stakeholders during a weekly program review. PA program officials developed an acquisition program baseline detailing FAC-Trax's schedule parameters, as well as an integrated master schedule for the system. The integrated master schedule identifies tasks, major milestones, and task dependencies. The PA program manager reviews and updates the integrated master schedule on a weekly basis. Program officials also review FAC-Trax's schedule with Office of Response and Recovery, PA, OCIO, and other program office stakeholders during a weekly program review. PA program officials identified the knowledge and skills needed to carry out the program in FAC-Trax contract documentation and the capability development plan. Specifically, program officials included an attachment to the FAC-Trax contract listing the required labor categories and corresponding functional position descriptions. Program officials also described the role, position type, minimum grade, and minimum certification for required personnel resources for the acquisition, development, and implementation of FAC-Trax. PA program officials developed, reviewed, and maintained project planning documents and obtained commitment from relevant stakeholders. For example, program officials reviewed and updated the integrated master schedule and costs on a weekly and monthly basis, respectively.
Further, program officials reviewed the status of project elements, such as the schedule, quality and technical issues, stakeholders, staffing, cost, and risks, with Office of Response and Recovery, PA, OCIO, and other program office stakeholders during a weekly program review. PA program officials also established tactical, functional, and stakeholder groups, as well as an Integrated Product Team to support and oversee the development of FAC-Trax. FEMA's Recovery Technology Programs Division (RTPD) has a division-level risk management plan that serves as guidance for all Recovery systems, including FAC-Trax. Program officials identified key risks that could negatively affect FAC-Trax work efforts in RTPD's "risk register"—an online site used to track risks, issues, and mitigating actions for the division and each program office. Program officials also identified five technical, cost, and schedule risks in the FAC-Trax acquisition plan. Program officials included one of these risks in the risk register, while the remaining four were managed outside of the register. As of May 2017, program officials had identified 13 risks in the risk register—four open and nine closed. The four open risks were (1) limited subject matter expert engagement during requirements development, (2) vacancies in program management office support positions, (3) unresolved service level agreement support and funding issues, and (4) the loss of the authority to operate due to a Trusted Internet Connection that is not compliant with Department of Homeland Security security policy. Program officials evaluated and categorized the identified risks based on the probability of occurrence and scope, schedule, and cost impacts. These four points of measurement are used to calculate an overall risk score. The risk score helps program officials assign each risk a rating of low, medium, or high. For example, program officials reported that two of the open risks have a "medium" risk rating—meaning the risk has the potential to slightly impact project cost, schedule, or performance. In addition, program officials detailed the risk category, probability, and impact for the five risks identified in the FAC-Trax acquisition plan. Program officials developed risk mitigation and contingency plans for each risk in the risk register. For example, program officials planned to mitigate the open risk concerning subject matter expert engagement by identifying and engaging with appropriate subject matter experts through requirements development workshops scheduled in advance of the sprint they are to support, and monitoring the development of user stories to identify any issues that may cause delays. In addition, program officials described the risk management plan and responsible officials for the five risks identified in the FAC-Trax acquisition plan. PA program officials review and update program risks during a monthly program meeting. Program officials also review program risks with Office of Response and Recovery, PA, OCIO, and other program office stakeholders during a weekly program review. Furthermore, the FAC-Trax contractor provides a weekly status update that includes a section on identified risks. Program officials established re-evaluation dates and recorded updates, including any actions taken, for each risk in the risk register. In addition, program officials were able to provide updates on the four risks identified in the FAC-Trax acquisition plan and managed outside of the register.
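The scoring approach described above can be illustrated with a minimal sketch in Python. The formula, the 1-to-5 scales, and the rating thresholds below are all hypothetical; the report states only that probability and the scope, schedule, and cost impacts are combined into an overall score that maps to a low, medium, or high rating.

# Minimal sketch of a risk-scoring scheme like the one described above.
# The formula, scales, and thresholds are assumptions, not FEMA's actual method.
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    probability: int      # 1 (unlikely) to 5 (near certain)
    scope_impact: int     # each impact: 1 (negligible) to 5 (severe)
    schedule_impact: int
    cost_impact: int

    def score(self) -> float:
        # Average the three impact dimensions, then weight by probability.
        impact = (self.scope_impact + self.schedule_impact + self.cost_impact) / 3
        return self.probability * impact

    def rating(self) -> str:
        s = self.score()
        if s >= 15:
            return "high"
        if s >= 8:
            return "medium"
        return "low"

risk = Risk("Limited subject matter expert engagement", 3, 3, 4, 2)
print(risk.score(), risk.rating())  # 9.0 medium

Under these assumed thresholds, a register entry scoring 9.0 would fall in the medium band, consistent with the "medium" ratings the program office reported for two of its open risks.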
According to PA officials, the four risks managed outside of the register were addressed and closed in September 2016, when program planning documents, such as the mission needs statement, concept of operations, and operational requirements document, were approved following the solutions engineering review, a review that demonstrates the readiness of the program to proceed with procurement. Program officials established a requirements management plan outlining how the program office captures, assesses, and plans for FAC-Trax enhancements, and established a change control process to review, prioritize, and verify user requests for changes to the system and feedback. As of May 2017, the PA program office had received 734 change requests related to FAC-Trax, of which program officials had completed 420 and planned to address an additional 277. PA program officials also facilitated workshops to gather requirements for specific user groups and obtained additional requirements for FAC-Trax through customer feedback on a temporary technology tool—an Access database referred to as the Public Assistance Recovery Information System—used to support an early stage of the new model implementation. Further, program officials developed a functional requirements document outlining the high-level functional and operational requirements for FAC-Trax. PA program officials developed a concept of operations for FAC-Trax detailing operating concepts and scenarios for each phase of the PA preaward process. Program officials also detailed the workflow, phases, business functions, and data inputs and outputs for the re-engineered PA process in FAC-Trax's functional requirements document. In March 2017, program officials developed a standard template to describe the process, tasks, and data inputs and outputs for specific system capabilities. As part of the change control process, PA program officials meet three times a week to discuss and prioritize change requests. Specifically, program officials review submissions to the change control form to ensure completeness, validate impacts and root cause, and research details for incoming requests. PA program officials also follow up with users to understand and verify requirements. In March 2017, program officials developed a standard template to capture acceptance criteria for specific requirements. However, PA program officials do not track system enhancements back to the high-level requirements identified in FAC-Trax's operational and functional requirements documentation and performance work statement. PA program officials identified system requirements and constraints in the FAC-Trax concept of operations and functional and operational requirements documents. Further, through its change control process, the program office collects suggestions, issues, and feedback on FAC-Trax and system enhancements from stakeholders, identifies risks for change requests, and balances prioritized requirements and estimated levels of effort with projected costs prior to each sprint. In March 2017, program officials developed a standard template to analyze and document the urgency and need for specific requirements. PA program officials and the FAC-Trax contractor established a testing and evaluation plan for the system, developed acceptance criteria for user stories, and obtained feedback from users during and after testing. The testing process concludes with user acceptance testing (UAT).
If a change request fails during UAT or a new requirement is discovered during development, the PA program will capture the failed request or new requirement in the product backlog for implementation in a future product release. Systems testing and integration (developing test plans and test cases): PA program officials and the FAC-Trax contractor tested and evaluated the system during development. The FAC-Trax test plan identifies the method and strategy to perform the testing, including the necessary tasks, testing parameters, and the roles and responsibilities of the individuals responsible for testing. However, program officials did not develop system testing criteria to evaluate FAC-Trax. A key feature of Agile software development is the "definition of done"—a set of clear, comprehensive, and objective criteria that the government should use to evaluate software after each iteration of development. PA program officials developed a systems integration plan in June 2017 that identifies potential integration of FAC-Trax and four FEMA systems, including the Emergency Management Mission Integrated Environment. Specifically, the plan includes data requirements and standards; descriptions of the four systems FEMA plans to integrate with FAC-Trax and the proposed relationship for each connection; and security and access management requirements. In addition, program officials included a description of how integration problems are to be documented and resolved in FAC-Trax development and test plans. However, the systems integration plan does not define roles and responsibilities of all participants for system integration activities or establish a sequence and schedule for every integration step for the four FEMA systems. ● Fully implemented: The agency provided evidence that it fully addressed this practice. ◐ Partially implemented: The agency provided evidence that it addressed some, but not all, portions of this practice. ◌ Not implemented: The agency did not provide any evidence that it addressed this practice. Appendix IV: GAO Contact and Staff Acknowledgments In addition to the contact named above, Chris Keisling (Assistant Director), Amanda R. Parker (Analyst-in-Charge), Mathew Bader, Allison Bawden, Anthony Bova, Eric Hauswirth, Susan Hsu, Rianna Jansen, Justin Jaynes, Tracey King, Matthew T. Lowney, Heidi Nielson, Claire Peachey, Brenda Rabinowitz, Ryan Siegel, Martin Skorczynski, Niti Tandon, Walter K. Vance, James T. Williams, and Eric Winter made key contributions to this report.
Why GAO Did This Study FEMA, an agency of the Department of Homeland Security (DHS), has obligated more than $36 billion in PA grants to state, local, and tribal governments to help communities recover and rebuild after major disasters since 2009. Further, costs are rising with disasters, such as Hurricanes Harvey, Irma, and Maria in 2017. FEMA recently redesigned how the PA program delivers assistance to state and local grantees to improve operations and address past challenges identified by GAO and others. FEMA tested the new delivery model in selected disasters and announced implementation in September 2017. GAO was asked to assess the redesigned PA program. This report examines, among other things, the extent to which FEMA's new delivery model addresses (1) past workforce management challenges and assesses future workforce needs; and (2) past information sharing challenges and key IT management controls. GAO reviewed FEMA policy, strategy, and implementation documents; interviewed FEMA and state officials, PA program applicants, and other stakeholders; and observed implementation of the new model at one test location following Hurricane Matthew in 2016. What GAO Found The Federal Emergency Management Agency (FEMA) redesigned the Public Assistance (PA) grant program delivery model to address past challenges in workforce management, but has not fully assessed future workforce staffing needs. GAO and others have previously identified challenges related to shortages in experienced and trained FEMA PA staff and high turnover among these staff. These challenges often led to applicants receiving inconsistent guidance and to PA project delays. As part of its new model, FEMA is creating consolidated resource centers to standardize and centralize PA staff responsible for managing grant applications, and new specialized positions, such as hazard mitigation liaisons, program delivery managers, and site inspectors, to ensure more consistent guidance to applicants. However, FEMA has not assessed the workforce needed to fully implement the new model, such as the number of staff needed to fill certain new positions, or to achieve staffing goals for supporting hazard mitigation on PA projects. Fully assessing workforce needs will help to ensure that FEMA has the people and the skills needed to fully implement the new PA model and help to avoid the long-standing workforce challenges the program encountered in the past. FEMA designed a new PA information and case management system—called the FEMA Applicant Case Tracker (FAC-Trax)—to address past information sharing challenges, such as difficulties in sharing grant documentation among FEMA, state, and local officials and tracking the status of PA projects, but additional actions could better ensure effective implementation. Both FEMA and state officials involved in testing of the new model stated that the new information system allows them to better manage and track PA applications and documentation, which could lead to greater transparency and efficiencies in the program. Further, GAO found that this new system fully addresses two of four key information technology (IT) management controls—project planning and risk management—that are necessary to ensure systems work effectively and meet user needs. However, GAO found that FEMA has not fully addressed the other two controls—requirements development and systems testing and integration. 
By better analyzing progress on high-level user requirements, for example, FEMA will have greater assurance that FAC-Trax will meet user needs and achieve the goals of the new delivery model. What GAO Recommends GAO is making five recommendations, including that FEMA assess the workforce needed for the new delivery model and improve key IT management controls for its new information sharing and case management system, FAC-Trax. DHS concurred with all recommendations.
Background Roughly two-thirds of domestic energy supplies are transported through over 2.6 million miles of pipelines throughout the United States. These pipelines carry hazardous liquids and natural gas from producing wells to end users (residences and businesses). Natural gas, which is combustible, accounts for 99.8 percent of all gas distributed in the United States. Other combustible gases transported by pipeline include hydrogen, landfill gas, synthetic gas, and propane. Within this nationwide system, three main types of pipelines serve different purposes and users (see fig. 1): Gathering pipelines. The estimated 11,500 miles of onshore gas gathering pipelines subject to PHMSA regulation collect natural gas from wells in production areas. These pipelines then typically transport the gas to processing facilities, which in turn refine it and send the gas to transmission pipelines. Gathering pipelines range in diameter from about 2 to 12 inches and operate at pressures that range from about 5 to 1,400 pounds per square inch (psi). These pipelines tend to be located in rural areas but can also be located in urban areas. PHMSA estimates that another 230,000 miles of gas gathering pipelines are not subject to federal regulation based on their generally rural location and low operating pressures. Transmission pipelines. The estimated 298,000 miles of onshore transmission pipelines carry natural gas, sometimes over hundreds of miles, to communities and large-volume users (e.g., factories). Transmission pipelines tend to have the largest diameters and pressures of any type of pipeline, generally ranging from 12 inches to 42 inches in diameter and operating at higher pressures ranging from 400 to 1,440 psi. Distribution pipelines. The estimated 2,170,000 miles of natural gas distribution and service pipelines transport natural gas from transmission pipelines to residential, commercial, and industrial customers. These pipelines tend to be smaller, sometimes less than 1 inch in diameter, and operate at lower pressures, from 0.25 to 100 psi. A given pipeline carries only one type of gas. These gases may be colorless and odorless, which is why odorizing them may be necessary to safely alert people of a leak. All odorants used in the United States contain sulfur. According to PHMSA officials, there are nine primary sulfur-based odorants used domestically for transporting combustible gas; all but one contain mercaptan—a type of chemical with a distinctive sulfur smell—which is blended with other chemicals for stability. Pipeline operators select the odorant blend that works best for their pipeline network. Distribution pipeline operators add the odorant to their gas, usually at the "city gate," or the place where transmission pipelines connect to a distribution pipeline network. The odorant is transported and stored in a concentrated liquid form that has a strong smell, is flammable, and is toxic. The odorant is injected into the gas stream at the "city gate" odorization station and vaporizes into the gas. In diluted form, odorants are nontoxic. PHMSA, within the Department of Transportation (DOT), administers DOT's national regulatory program to ensure the safe transportation of natural gas by pipeline. PHMSA oversees and enforces pipeline operators' compliance with federal odorization requirements for interstate pipelines, which are primarily transmission pipelines.
Most states have agreements with PHMSA to oversee and enforce pipeline operators' compliance with federal requirements—including odorization requirements—for intrastate pipelines, which are primarily distribution pipelines. These states may also impose safety requirements that are more stringent than federal requirements. Under the current regulatory system, most gathering pipelines are not subject to federal safety requirements, based on their location. Only gathering pipelines close to populated areas or waterways are currently subject to federal requirements. In March 2012, we reported that land use changes have resulted in development encroaching on existing gathering pipelines and the increased extraction of oil and natural gas from shale deposits has resulted in the development of new gathering pipelines, some of which are larger in diameter and operate at higher pressure than older pipelines. Therefore, we recommended that PHMSA collect data on gathering pipelines to help determine whether to expand regulation of these pipelines. In April 2016, PHMSA issued the Gas Transmission and Gathering Notice of Proposed Rulemaking that would: 1) require all gas gathering pipeline operators to submit operating and accident data to PHMSA, 2) more clearly define "gathering pipeline" to better identify pipelines subject to PHMSA's requirements, and 3) increase the number of gathering pipeline miles under PHMSA's jurisdiction. PHMSA estimates that the new rule would increase the number of gathering pipeline miles with reporting requirements by 344,000 and the number of gathering pipeline miles subject to additional safety measures by almost 70,000. The overall framework for federal gas pipeline regulations—including odorization requirements—is designed to mitigate risk. All pipelines regulated by PHMSA are required to meet uniform, minimum safety standards. Regarding odorization, these minimum standards prescribe that a combustible gas must be odorized so that at a concentration in air of one-fifth of the lower explosive limit, the gas is readily detectable by a person with a normal sense of smell. The proximity of pipelines to populated areas, where leaks present the greatest risk, determines whether or not the gas needs to be odorized. Since 1970, PHMSA has categorized pipelines into four classes based on their proximity to populated areas to determine the odorization requirements for gas transported by distribution and transmission pipeline. Class 1 locations are in rural areas and Class 4 locations are in densely populated areas (see table 1). All combustible gases transported by distribution pipelines are required to be odorized because these pipelines are primarily in populated areas. Some transmission pipelines in highly populated—Class 3 and 4—areas are also required to be odorized. In addition, PHMSA has a supplemental risk-based regulatory program termed "integrity management" for pipelines in "high-consequence areas" where an incident would have greater consequences for public safety or the environment. Integrity management has been a part of PHMSA's risk-based regulatory approach for natural gas transmission pipelines since 2004, and for natural gas distribution pipelines since 2011. The risk-based integrity management programs for natural gas transmission pipelines require operators to systematically identify and mitigate risks to pipeline segments located in high-consequence areas.
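The one-fifth-of-the-lower-explosive-limit standard translates into a concrete concentration. As a worked example, assume a lower explosive limit for natural gas (mostly methane) of roughly 5 percent gas in air by volume, a commonly cited figure that is not stated in this report:

required detectable concentration = (1/5) x LEL = 0.20 x 5 percent = 1 percent gas in air

In other words, a person with a normal sense of smell must be able to detect the odorized gas well before the mixture approaches a combustible concentration.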
Under the integrity management program, for example, operators in high-consequence areas must monitor their pipelines for signs of corrosion and repair corroded lines within a specified period of time. High-consequence areas for natural gas pipelines include highly populated or frequently used areas, such as parks. These areas may overlap with Class 3 or Class 4 locations. The integrity management program for distribution pipelines applies to all distribution pipelines due to their proximity to populated areas. Officials and Stakeholders Said That Odorizing Gas in Pipelines Improves Public Safety, but Can Impede Some Industrial Processes Pipeline Gas Odorization Facilitates Early Detection, Particularly in Populated Areas Almost all officials and stakeholders we interviewed and the state pipeline safety officials we surveyed told us that the advantage of using sulfur-based odorants to odorize combustible gas transported by pipeline is public safety. Sulfur-based odorants have a low odor threshold, so they are easily detected at low concentrations. With a smell similar to that of rotten eggs, this odor is particularly advantageous when used in distribution pipelines that are located in areas where people congregate (e.g., homes, businesses, and hospitals). If individuals smell an odorant, they can call emergency services and alert those nearby of a potential gas leak, possibly helping to prevent an explosion that could result in the loss of life and property. According to federal regulations, all local distribution companies must conduct outreach to educate the public and others on what to do when they smell a gas leak. To this end, the 2017 American Gas Association Odorization Manual (manual) states that some local distribution companies have gone beyond placing the traditional scratch-and-sniff insert in customers' billing statements—to inform them about gas leaks and odor—to implementing "Smell Gas Act Fast!" campaigns. According to the manual, these campaigns are designed to better educate the public on the smell and nature of natural gas, along with the need to quickly take action if the odor is detected. Responding immediately to the smell of natural gas can help to prevent possible accidents. For example, when authorities were reportedly called to a Rockville, Maryland home in November 2017 to investigate an odor from a natural gas leak, authorities evacuated several nearby homes as a safety precaution in the event of an explosion, until the source of the leak could be identified and addressed. While nearly all stakeholders we interviewed said that public safety was the key advantage associated with odorizing combustible gases (in particular, combustible gases transported by distribution pipeline), some experts expressed differing opinions on the use of handheld electronic combustible gas detection devices as an alternative to detect gas leaks. According to one expert, these devices are better suited to detect gas at levels much lower than an individual's sense of smell would allow. This expert also noted that odor does not wake a sleeping individual, so a gas leak could go undetected for hours.
However, a second expert noted that during his 40 years of experience with pipeline accident investigations, he had become aware of about 10 cases in which individuals killed in gas leak accidents were found holding portable combustible gas detectors, either because (1) the device did not indicate the presence of gas in one location even though a nearby location was explosive due to a gas leak, or (2) the user had not been properly trained on the instrument's limitations for identifying a safe area. Accordingly, that expert stated that odorization is the most effective safety method for alerting the public of a possible gas leak. Additionally, a third expert noted that (1) electronic detectors can be difficult to place in certain areas and (2) odorants allow the public to quickly detect gas leaks without acquiring or maintaining external equipment. The Primary Disadvantages Officials and Stakeholders Cited Are Odor Removal for Some Industries and False Alarms The most common disadvantage of sulfur-based odorants cited by officials and stakeholders we contacted is the need to remove the odorant for some industrial processes. Officials from both federal safety regulatory agencies we interviewed (PHMSA and NTSB); approximately half of state pipeline safety officials surveyed; and about half of the stakeholders interviewed reported that sulfur-based odorants used in transmission pipelines can cause an adverse chemical reaction during processing for some industries. For example, sulfur in natural gas can be detrimental in the production of electricity, fertilizer, and glass because it interferes with the catalyst used during production. PHMSA and NTSB officials and about half of the stakeholders said that before these items are produced, operators must remove any added (or naturally occurring) sulfur from their combustible gas, adding another step to production. One expert and three stakeholders told us that removing the odorant also resulted in added cost for some operators. However, because most transmission pipelines are in less populated areas and not odorized, many manufacturers currently receive unodorized gas from transmission pipelines and do not need to remove odorant, according to the industry associations we interviewed. In addition, some stakeholders warned that accidental spills of concentrated odorant, using more odorant than needed, or releasing excessive amounts of odorant during operators' maintenance activities can lead to false alarm calls. One pipeline operator told us that an employee spilled odorant on a glove, and the public made several false alarm calls due to the odorant's potent smell as the employee drove through town with the glove on the back of a truck. Officials from PHMSA, an official from a pipeline safety organization, and representatives from two pipeline industry associations told us that the public could get accustomed to these types of odorant leaks and begin to ignore them or have a false sense of security when a real gas leak does occur. For example, the official from the pipeline safety organization told us that he has heard of at least one location where odorant leaks frequently occurred, and the public began to ignore the smell. Additionally, under certain conditions, sulfur-based odorants can be hazardous to human health and the environment. A few stakeholders told us that odorants released in excessive amounts may cause health concerns.
For example, during a presentation before the Pipeline Safety Trust, a Los Angeles County public health official stated that a sulfur-based odorant appears to have been related to public health complaints made in 2015 after a 4-month-long natural gas leak from a natural gas storage facility in California's Aliso Canyon. Many of the reported symptoms matched those made after a 2008 natural gas storage tank leak in Alabama, which included respiratory problems; eye, nose, and throat irritation; headache; nausea; and dizziness. While at least one study has been conducted and another is planned on the long-term effects of sulfur-based odorants on human health, no direct cause-and-effect relationships have been established. Finally, a few stakeholders noted potential environmental hazards regarding the use of odorants. For example, one stakeholder told us that odorants can become a hazardous waste depending on the quantity used and the amount of time the chemical remains in one location prior to use; one expert and another stakeholder noted that sulfur-based odorants when spilled may contaminate waterways; and four experts and two stakeholders warned that when combusted, sulfur-based odorants can produce acid rain. Also, according to PHMSA officials, these odorants are both toxic and flammable in their concentrated state. However, none of the stakeholders provided specific examples of when an odorant caused environmental damage. Officials and Stakeholders Had Mixed Views on Need to Modify Odorization Requirements Many Officials and Stakeholders Agreed That Federal Distribution Pipeline Odorization Regulations Do Not Need to be Modified General consensus exists among those we spoke with (including federal regulatory and safety officials, experts identified by the National Academies, and industry stakeholders) that federal requirements to odorize all gases in distribution pipelines are sufficient as written and do not need to be modified. PHMSA and NTSB officials we interviewed and many commenting stakeholders articulated this view. In addition, state pipeline officials we surveyed generally did not indicate a need to change federal regulations for odorizing distribution pipelines. Due to the proximity of distribution pipelines to areas where people live and work, officials, experts, and stakeholders we interviewed emphasized the importance of odorizing gas in distribution pipelines to reduce the safety risk to the public. As we have previously reported, the operating characteristics of distribution pipelines make odorant a key factor in reducing safety risk. In 2012 we reported that distribution pipelines operate at lower pressures, so pipeline failures are more likely to involve slow leaks rather than explosive ruptures. Leaking gas can accumulate in confined spaces, or migrate away from the pipeline until it finds an ignition source, potentially causing injury, death, and property damage. These slow leaks are difficult to see or hear, so odorants provide a critical warning to call emergency services and inform those nearby of a potential gas leak before it ignites. Many Officials and Stakeholders Agreed That Odorizing Gathering Pipelines Could Be Technically Challenging with Little Added Safety Benefit Of those we interviewed or surveyed, about half of stakeholders and a third of state pipeline safety officials did not indicate a need to modify existing regulations for odorizing gas in gathering pipelines.
Further, a few commenting experts said odorizing those pipelines would be technically challenging. According to the experts, technological challenges stem from the fact that gas contains natural sulfur at many of the wells where gathering pipelines collect the raw gas. The natural sulfur in the raw gas could counteract the added chemical sulfur odorant, masking the smell of each and lowering the effectiveness of the odorant. Further, one stakeholder said that odorizing gathering pipelines would be logistically difficult and expensive given the number of wells that would each need an odorization station. For example, according to this stakeholder, there are roughly 500,000 gas wells nationwide and each odorizer would cost $2,000 as a capital investment, implying on the order of $1 billion in equipment costs alone. In addition, this stakeholder said that any safety benefit of adding odorant would be limited because most gas wellheads and gathering pipelines are located in sparsely populated rural areas. While the majority of stakeholders and state survey respondents did not see a need to odorize gas in gathering pipelines, a third of the state safety officials and three other stakeholders said all gathering pipelines should be odorized for additional safety regardless of any technical challenge. However, requiring all gathering pipelines to be odorized at the federal level would have to be consistent with federal pipeline safety regulations, under which a risk assessment, including an assessment of the benefits and costs of proposed regulatory standards, must be considered in any decision on whether to impose a new safety standard. According to PHMSA officials, they do not have the data to report on any incidents on gathering pipelines where odorant may have made a difference. Moreover, PHMSA officials stated that they do not have the data to formulate an educated opinion as to the need to odorize gathering pipelines. To address this lack of data, the Pipeline Safety: Safety of Gas Gathering Pipelines rulemaking, if approved, would provide PHMSA with more data on gas gathering pipeline infrastructure and incidents. According to PHMSA officials, the data collected will inform PHMSA on the best path forward regarding further regulation of gas gathering pipelines, including the need for odorization. Officials anticipate publishing the final rule in summer 2019. Officials' and Stakeholders' Views Differed on Need to Odorize Transmission Pipeline Gas Officials, stakeholders, and survey respondents generally disagreed about the need to odorize all transmission pipelines. Officials from NTSB as well as about half of the stakeholders we contacted said the current regulations for odorizing gas in certain transmission pipelines in populated areas were sufficient. Additionally, NTSB officials said they were not aware of incidents where odorants in a transmission pipeline would have alerted the public in time to prevent the incident. These officials and stakeholders generally said that odorizing gas in transmission pipelines is not an effective means of reducing the risk of an incident. For example, one stakeholder said that at the typically high pressure at which most transmission pipelines operate, even a relatively small hole in the pipeline would cause a rupture that would excavate the earth around it, so people would hear and see the evidence of the leak.
Some experts also said that odorizing gas in all transmission pipelines could increase costs and create other challenges for pipeline operators or gas end users. For example, one expert said that odorizing all gas transported in the transmission pipeline system would require tens of thousands of odorization facilities. This expert also said that if gas is odorized in transmission pipelines, some industries currently receiving unodorized gas will be affected negatively because they either must incur the additional processing and cost of removing the odorant or find new ways to receive gas that is not odorized. Further, PHMSA officials and representatives from the Interstate Natural Gas Association of America said that the integrity management program for transmission pipelines provides more preventative, risk-based safety management than odorants, which rely on reducing risk through early detection of a leak that has already occurred. The integrity management program requires operators to assess the integrity of their pipelines within high-consequence areas—which, by definition, encompass Class 3 and 4 locations—on a regular basis using any of three approved methods: (1) running an in-line inspection tool, or "smart pig," through the pipeline to detect anomalies, such as corrosion, that can cause leaks; (2) conducting a direct assessment using data and direct examination of the pipeline from aboveground to identify problem areas; or (3) hydrostatically testing a portion of the pipeline by removing the gas product, replacing it with water, and increasing the pressure of the water above the maximum allowable operating pressure of the pipeline to test its integrity. These inspection methods are designed to detect issues that could cause a gas leak before the leak occurs. Following the assessments, pipeline operators are required to prioritize and repair anomalies found during assessments. While odorants could be added in addition to integrity management requirements, PHMSA officials said that integrity management more effectively helps assure an acceptable level of safety for transmission pipelines than an odorant could because the risk assessments focus on the potential causes of leaks and ruptures for these types of pipelines and, therefore, are more preventative than odorizing. In a September 2006 report, we found that PHMSA's gas pipeline integrity management program benefits public safety by incorporating risk-based management principles into pipeline safety oversight, and in June 2013, we reported that transmission pipeline operators were conducting periodic assessments and making repairs to pipelines in high-consequence areas. Transmission pipeline operators are also required through the integrity management program to proactively take measures to reduce the risk or potential impact of an accident. Based on inspections of interstate transmission operators' integrity management programs, PHMSA officials noted that—while transmission pipeline operators could opt to odorize gas in a transmission pipeline—they are not aware of any operator to date that has concluded that odorizing transmission pipelines was necessary to reduce risk. Instead, operators use tools such as electronic leak detection and remotely controlled valves to detect potential leaks and shut down the pipeline if needed.
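The hydrostatic method described above can be illustrated with a simple calculation. The 1.25 multiplier below is an assumed test factor used only for illustration; actual required factors vary by class location and are not given in this report:

test pressure >= test factor x MAOP = 1.25 x 1,000 psi = 1,250 psi

That is, a segment with a maximum allowable operating pressure of 1,000 psi would be filled with water and pressurized to at least 1,250 psi to demonstrate its integrity before being returned to gas service.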
While the preventative safety practices required under the gas transmission pipeline integrity management program are designed to mitigate risk without requiring the use of odorant, officials from two states and one stakeholder questioned the sufficiency of integrity management practices. However, as part of two ongoing rulemakings (Pipeline Safety: Safety of Gas Transmission Pipelines, MAOP Reconfirmation, Expansion of Assessment Requirements and Other Related Amendments; and Pipeline Safety: Safety of Gas Transmission Pipelines, Repair Criteria, Integrity Management Improvements, Cathodic Protection, Management of Change, and Other Related Amendments), PHMSA also plans to strengthen and expand requirements for the gas integrity management program for transmission pipelines. For example, PHMSA plans to expand the requirements for periodic assessments and subsequent repairs to additional pipeline mileage beyond that located in high-consequence areas. PHMSA plans to publish these rulemakings in March and June 2019, respectively. The 2016 PIPES Act includes a mandate for GAO to review PHMSA's gas integrity management program as soon as PHMSA publishes the final rule. In contrast to the opinions expressed above that transmission pipeline odorization requirements are sufficient, 31 of 49 state pipeline safety officials surveyed responded that these requirements are not stringent enough for safety. Of these respondents, several said that exemptions that currently apply to some operators with transmission pipelines in Class 3 and Class 4 locations should not be allowed. There are several exemptions, determined by the overall class location of the pipeline or the end use of the gas. For example, one class location exemption provides that when at least 50 percent of the length of the pipeline downstream from the more populated Class 3 or Class 4 location is in a less populated Class 1 or Class 2 location, the gas does not need to be odorized (see fig. 2). Eliminating the current regulatory exemptions for certain transmission pipelines and requiring operators to odorize all gas transported by transmission pipeline through Class 3 or Class 4 locations may not be cost-beneficial under federal regulatory risk assessment principles, which direct the agency to assess the benefits and costs of changes in regulatory standards. For example, while four states cited increased public safety as the reason to remove the existing exemption, PHMSA and NTSB officials could not identify any incidents where odorants in a transmission pipeline would have prevented damage. In addition, as described above, some experts told us that removing the exemptions could increase costs and create other challenges for pipeline operators or gas end users. PHMSA officials also said that the definition of a high-consequence area under the gas integrity management program encompasses all Class 3 and Class 4 locations, so the risk-based preventative measures required under that program apply to the areas exempt from odorization requirements. Agency Comments We provided a draft of this product to DOT for review and comment. DOT provided technical comments that were incorporated, as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of the Department of Transportation, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (202) 512-2834 or FlemingS@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Appendix I: Advantages and Disadvantages of Non-sulfur Based Odorants While our report focuses on sulfur-based odorants, which are used in the United States, we also asked experts and stakeholders about the advantages and disadvantages of non-sulfur based odorants. According to a German-based manufacturer of non-sulfur odorants, these odorants are used in some European countries, including Germany and Austria. This manufacturer also told us that the German energy industry has embraced using non-sulfur based odorants, in part, to meet German emissions regulations, as these odorants do not produce sulfur dioxide or contribute to acid rain when burned. Most of the experts and stakeholders that we interviewed were generally unfamiliar with non-sulfur based odorants. Those with some familiarity offered the following advantages and disadvantages. Advantages: Three experts and stakeholders reported that non-sulfur based odorants have less adverse impact on the environment (for example, they do not produce acid rain when burned); may cost less for some operators because less product may be needed than with sulfur-based odorants; and do not adversely impact some operators' processes. Disadvantages: Four experts noted that non-sulfur based odorants have a smell that the American public does not associate with a gas leak. Two experts commented that non-sulfur based odorants may be chemically unstable and can react with other compounds. Two experts noted that non-sulfur based odorants may have a higher level of toxicity. Appendix II: Experts and Other Industry and Safety Stakeholders Interviewed by GAO Appendix III: Contact and Staff Acknowledgments Contact Susan Fleming, (202) 512-2834 or FlemingS@gao.gov. Staff Acknowledgments In addition to the individual named above, other key contributors to this report were Sara Vermillion, Assistant Director; Sarah Jones, Analyst in Charge; Jennifer W. Clayborne; Timothy J. Guinane; David K. Hooper; Delwen A. Jones; Josh Ormond; Rebecca R. Parkhurst; and Kelly L. Rubin.
Why GAO Did This Study The nation's gas pipeline network moves about 74 billion cubic feet of combustible gas to homes and businesses daily. To alert the public of a gas leak before an explosion occurs, PHMSA has different requirements for odorizing gas. All gas transported by distribution pipelines throughout communities must be odorized. Gas transported across many miles by transmission pipelines is required to be odorized only in certain populated areas. There are no requirements to odorize gas in gathering pipelines. Congress included a provision in statute for GAO to review odor requirements for all pipelines. This report presents the views of federal and state pipeline safety officials and industry and safety stakeholders on: (1) the advantages and disadvantages of odorizing combustible gases in pipelines; and (2) whether and how federal requirements for odorizing pipelines should be modified. GAO reviewed relevant regulations and reports; surveyed officials in 48 states and the District of Columbia; and interviewed PHMSA and NTSB officials. GAO also interviewed 34 stakeholders, including 14 experts identified by the National Academies and 20 other industry and safety stakeholders. What GAO Found Pipeline and Hazardous Materials Safety Administration (PHMSA) and National Transportation Safety Board (NTSB) officials, state officials, and stakeholders GAO contacted cited safety as the main advantage to odorizing combustible gases in pipelines, primarily for distribution pipelines in densely populated areas (see figure). Specifically, adding a chemical with a distinctive odor to gas allows the public to generally detect leaks before an explosion can occur. The most frequently cited disadvantage was that commonly used sulfur-based odorants must be removed—primarily from gas in transmission pipelines—before the gas can be used in certain processes, such as producing fertilizer. While federal odorization requirements follow a risk-based approach by focusing on pipelines in populated areas, the officials and stakeholders GAO contacted disagreed on the need to modify these requirements for some pipelines. Specifically, because distribution pipelines run through populated areas, everyone GAO contacted generally agreed that these pipelines should be odorized for safety, as currently required. For gathering pipelines, the majority of officials and stakeholders did not see a need to modify regulations because these pipelines would be technically challenging to odorize and are primarily located in rural areas. However, about two-thirds of state officials and about half of stakeholders said that additional transmission pipelines should be odorized for public safety. Conversely, officials from PHMSA and NTSB and about half of the stakeholders contacted noted that, because transmission pipelines operate at high pressure and generally rupture rather than leak, it is unlikely that odorant could mitigate risk. Instead, other required safety practices—such as internal pipeline inspections—can provide more preventative, risk-based safety management, according to PHMSA officials. In this regard, PHMSA officials said that they plan to strengthen risk-based safety requirements for transmission and gathering pipelines as part of ongoing rulemakings. PHMSA anticipates issuing these rules in 2019.
Background Viewed broadly, IDT refund fraud comprises two crimes: (1) stealing or compromising PII and (2) using stolen (or otherwise compromised) PII to file a fraudulent tax return and collect a fraudulent refund. Figure 1 presents an example of how fraudsters may use stolen PII and other information, real or fictitious (e.g., sources and amounts of income), to complete and file a fraudulent tax return and receive a refund. In this example, a taxpayer may alert IRS of IDT refund fraud. Alternatively, IRS can detect IDT refund fraud through its automated filters that search for specific characteristics as well as through other reviews of taxpayer returns. Information Sharing and Analysis Centers In May 1998, Presidential Decision Directive 63 introduced and promulgated the concept of ISACs, which help critical infrastructure owners and operators protect facilities, personnel, and customers from cyber and physical security threats and other hazards. ISACs typically collect, analyze, and disseminate actionable threat information to their members and provide members with tools to mitigate risks and enhance resiliency. ISACs have been used in other sectors such as energy, financial services, and surface transportation to facilitate coordination between public and private entities. We have reported that ISACs have developed diverse management structures and operations to meet the requirements of their respective critical infrastructure sectors. Likewise, we also have assessed federal support to fusion centers, information sharing platforms between the government and the private sector that help prevent and respond to criminal and terrorist activity. ISAC characteristics differ across various sectors; however, we have reported common challenges—including information sharing—that need to be addressed for an ISAC to be successful. Barriers to information sharing may stem from practical considerations because the benefits of sharing information are often difficult to discern, while the risks and costs of sharing are direct and foreseeable. As a result, we have noted that it is important to lower the practical risks of sharing information through both technical means and policies, and to develop internal systems that are capable of supporting operational requirements without interfering with core operations. IRS's Information Sharing and Analysis Center Mission The mission is to provide a secure platform via a sustainable public/private partnership to facilitate information sharing, consistent with applicable law, and analytics necessary to detect, prevent, and deter activities related to stolen identity refund fraud. IRS's ISAC—the Identity Theft Tax Refund Fraud-Information Sharing and Analysis Center—is intended to improve collaboration and information sharing among IRS, states, and industry partners and began as a pilot in January 2017. (See sidebar.) Two entities operate under the ISAC umbrella. One entity is the ISAC Partnership, a collaborative organization run jointly by IRS, states, and industry partners. The other entity is the ISAC online platform, which is controlled by IRS and includes an early warning alarm system that allows states and industry partners to share information related to IDT refund fraud and schemes more quickly to better defend against fraud. Additional Information Sharing Efforts Outside of the ISAC, four other efforts have supported information sharing about potential IDT refund fraud for years.
Suspicious Filer Exchange: The Federation of Tax Administrators (FTA) operates an online platform for states to share information—including record-level data—among themselves about suspected fraud.

Industry Leads Program: This IRS-operated program requires tax preparation companies to perform post-filing analysis and provide, on a recurring and timely basis, information to IRS on IDT refund fraud patterns and indices as a condition of electronically filing returns. IRS then provides this information to states, which are to use the information to bolster their fraud detection and prevention efforts.

External Leads Program: This IRS-operated program involves third parties such as banks or other financial institutions providing information to IRS about questionable refunds. If the questionable refund is confirmed as fraudulent, IRS requests that the financial institution return the refund.

Opt-In Program: IRS operates this program, which allows financial institutions to electronically reject suspicious refunds, return them to IRS, and indicate why the institution is rejecting the refunds.

Rapid Response Team
The RRT, which began in the 2016 filing season, coordinates responses to IDT refund fraud incidents that IRS, states, or industry partners believe pose a significant and immediate threat to taxpayers or the tax system. The Information Sharing work group is responsible for managing the RRT and is led by one representative each from IRS, states, and industry. The main component of the RRT process is a call among relevant IRS, state, and industry partners to coordinate a response to the incident. IRS's goal is to convene the call within 24 to 72 hours after an incident is discovered. The RRT process describes the next steps for the first 3 days after an incident is identified. The RRT process differs depending on whether the incident is reported by IRS, a state, or an industry partner, based on the laws governing information sharing discussed later in this report. For example, if a state identifies an incident, the RRT process indicates that the state should share that information—including Social Security numbers, as appropriate—with IRS and other states on the next business day and with industry in the next 2 to 3 days. If IRS or an industry partner identifies an incident, the RRT process indicates that IRS or the industry partner should share relevant information in the next 2 to 3 days. In the 2016 filing season, the RRT was deployed for six incidents. For example, as we reported in January 2017, IRS announced in February 2016 that cybercriminals had stolen more than 100,000 e-file Personal Identification Numbers (PIN) from an online tool. Stolen e-file PINs could be used to file fraudulent federal tax returns.

IRS Has Taken Significant Actions to Facilitate Information Sharing through the ISAC and RRT
IRS implemented the ISAC in 2017 to facilitate information sharing among IRS, state, and industry partners—subject to disclosure prohibitions—by launching an online platform, establishing a governance structure, and recruiting members. IRS and state officials and industry representatives attributed increased trust and improved relationships to IRS's efforts in recent years. Additionally, IRS coordinated with state and industry partners to establish the RRT in 2016, which has been initiated once thus far in 2017.
IRS Actions to Implement the ISAC Include Launching the Online Platform and Recruiting Members
The ISAC online platform provides two capabilities—alerts and record-level data—that facilitate information sharing.

Alerts: This capability consists of alerts on potential IDT refund fraud that have been identified by IRS, states, or an industry partner and shared on the ISAC online platform. Alerts are available to all states and Security Summit partners who sign a terms of use agreement. Alerts include detailed information about identified schemes, indicators of suspicious activity, and types of accounts targeted, among other things. Alerts may also include anecdotal evidence from ISAC members who have already been targeted by the scheme.

Record-level data and analysis: This capability consists of several tools to facilitate IDT refund fraud prevention and detection, including a secure data transfer tool that members can use to input IDT refund fraud data and record-level data. Record-level data may include PII or other details about suspected fraud. States and industry partners share record-level data with the ISAC; however, according to IRS officials, IRS does not, due to legal restrictions. This part of the ISAC also contains, among other things, analytic reports that identify, for example, Internet Protocol (IP) addresses associated with potential fraud. This space is only accessible to full ISAC members.

Information that is shared and available to be reviewed by various ISAC stakeholders is controlled by disclosure laws within the Internal Revenue Code. According to IRS officials, IRS does not contribute Federal Tax Information to the ISAC because those data are protected from disclosure under section 6103 of the Internal Revenue Code, which generally prohibits IRS from disclosing tax returns or return information. Similarly, IRS does not control or have ownership of any record-level data on the ISAC. Instead, IRS receives record-level data directly from states and industry partners through other channels, such as the External Leads Program. IRS can, however, still contribute alerts that do not include record-level data. Moreover, unless exempted, section 7216 of the Internal Revenue Code prohibits disclosure or use of taxpayer information by preparers of returns and imposes criminal penalties on knowing or reckless disclosure. Disclosure of information from one preparer to another preparer, or disclosure to federal, state, or local officials to inform them of activities that may constitute a crime, is permitted by Department of the Treasury (Treasury) regulation. As seen in figure 2, tax preparation companies—covered under section 7216 and referred to as industry 7216—have full access to all of the information provided to the ISAC. However, financial institutions—not covered under section 7216 and referred to as industry non-7216—are not able to view record-level data submitted by, or comingled with data from, tax preparation companies. Three of the 17 industry members of the ISAC are financial institutions—non-7216 entities—and therefore have this more limited view.

IRS contracted with a company to facilitate information sharing among partners. The contractor developed and manages the online platform and also analyzes data on IDT fraud, which it makes available to IRS's ISAC members. In addition, IRS developed a governance structure for the ISAC. Figure 3 shows these and other key events.
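The section 6103 and 7216 disclosure rules described above effectively define a small access-control matrix: what a participant may view depends on whether it is IRS, a full state member, a 7216 tax preparation company, or a non-7216 financial institution. The following minimal sketch, in Python, encodes this report's description of those rules; the member categories and data labels are hypothetical shorthand for illustration, not IRS's actual platform logic or API.

# Illustrative only: encodes this report's description of ISAC access rules.
# Member categories and data labels are hypothetical shorthand.

ALERTS = "alert"                    # alerts carry no record-level data
RECORD_PREPARER = "record_7216"     # record-level data from tax preparation companies
RECORD_OTHER = "record_non_7216"    # record-level data from states or financial institutions

def can_view(member_type, data_type):
    """Return True if a member category may view a data category."""
    if data_type == ALERTS:
        # Alerts are available to all states and Security Summit partners
        # that sign the terms of use agreement.
        return True
    if member_type == "irs":
        # IRS neither views nor contributes record-level data on the ISAC.
        return False
    if member_type == "industry_non_7216":
        # Financial institutions cannot view record-level data submitted by,
        # or comingled with data from, tax preparation companies.
        return data_type != RECORD_PREPARER
    # Full state members and 7216 tax preparation companies have full access.
    return member_type in ("state_full", "industry_7216")

assert can_view("industry_non_7216", RECORD_PREPARER) is False
assert can_view("industry_7216", RECORD_PREPARER) is True
assert can_view("state_full", RECORD_OTHER) is True
assert can_view("irs", ALERTS) is True

Writing the rules down this way makes the asymmetry explicit: alerts are the only data category visible to every participant, including IRS.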
Three of IRS's goals for the ISAC when it launched in 2017 were to (1) launch the online platform, (2) establish the governance structure, and (3) recruit new members.

In terms of its first goal, as noted, the online platform became operational on January 23, 2017. IRS's contractor provided ISAC members with training on how to use the online platform and the data visualization tools. (See figure 4.) The data visualization tools include charts and figures with data on trends in refund fraud. The tools are available to members of the ISAC, with the exception of financial institutions, which cannot view data visualization tools compiled with tax preparation company data (as noted in figure 2 earlier). The ISAC also established a community of practice (COP) that brings together fraud analysts from IRS, states, and industry partners to share leading practices. The intent is to encourage dialogue among staff involved in implementing fraud prevention strategies. In our focus groups, an industry official said that the COP has been a positive experience for industry, but most state officials said they were not familiar with the COP.

In terms of establishing a governance structure, the ISAC Partnership is governed by the ISAC Senior Executive Board (Board), which consists of 15 members, with 5 representatives each from IRS, states, and industry. The Board is principally responsible for crafting mission and vision statements for the ISAC Partnership; recommending ISAC operating procedures; and nominating new ISAC platform participants and recommending the removal of such participants, among other responsibilities. An IRS executive official must approve any recommendation by the Board that affects the online platform. The partnership also includes three subgroups: metrics, outreach, and governance.

IRS also made progress on its goal of recruiting new participants. As of November 2017, the ISAC had 24 full state members, 7 alerts-only state members, 14 tax preparation company members, and 3 financial institution members. An additional 7 states had membership pending. In total, 38 states were members (either full members or those receiving only alerts) or had membership pending. Goals moving into the 2018 filing season include increasing the participation of current members, exploring additional analytical capabilities, and establishing and refining performance metrics.

Partners Attributed Improved Collaboration to the Security Summit and ISAC
In our focus groups, industry representatives said that they see ISAC collaboration as critical to managing IDT threats. The ISAC is intended to go beyond other efforts, most notably in that it brings IRS, states, and industry together in equal partnership and allows for communication among all stakeholders. IRS reports that over 1.8 million leads have been submitted to the ISAC by 14 partners. However, the number of leads does not reflect their quality. Industry representatives we spoke with in our focus groups said that they would like feedback from IRS on the usefulness of industry leads so that they can adjust their fraud filters and provide more accurate leads. These comments about the usefulness and quality of industry leads are consistent with what our prior work has found on the value of external leads. Specifically, in 2014, we recommended that IRS take the following actions on its External Leads Program:
1. provide aggregated information on both the success of external leads in identifying suspicious returns and emerging trends (pursuant to section 6103 restrictions), and
2. develop a set of metrics to track external leads by the submitting third party.

IRS has taken steps to address these recommendations, including developing timeliness metrics for managing leads and holding six feedback sessions with financial institutions participating in the External Leads Program. As of November 2017, we are following up with industry members to determine if they consider the feedback accurate, timely, and actionable. Without such feedback, the more than 600 external parties participating in the External Leads Program do not know if the leads they provide to IRS are useful, and they may not be able to assess their success in identifying IDT refund fraud or improve their detection tools.

In the focus groups, both state officials and industry representatives said the relationship among IRS, states, and industry has improved as a result of increased collaboration over the last several years. As of November 2017, the ISAC had 48 members. Further, IRS officials said they think trust and the relationship between all parties have improved and are continuing to improve. Likewise, in the focus groups, industry officials cited benefits of improved coordination from the Security Summit. For example, one industry representative cited IRS's pushing out communications faster because of the Security Summit, while another noted that participation in the summit has made IRS officials more accessible. However, in focus groups, a few state officials noted that because IRS is compartmentalized, they have found their interactions with IRS to be inconsistent. For example, these state officials reported that some IRS units are more responsive than others and that information sometimes is not shared among IRS units.

IRS Established the RRT in 2016 and Initiated the RRT Process Once in the 2017 Filing Season
As part of establishing the RRT, IRS outlined the responsibilities of IRS, states, and industry in responding to significant IDT refund fraud incidents. As noted earlier in this report, the RRT was activated six times in 2016. IRS initiated the RRT once in the 2017 filing season for a data breach related to the Department of Education. In March 2017, IRS and the Department of Education responded to security concerns and removed access on https://www.fafsa.gov and https://www.StudentLoans.gov to IRS's Data Retrieval Tool—the online process through which student financial aid applicants obtain their family's tax information. IRS suspects that fraudsters used personal information obtained elsewhere to access the Data Retrieval Tool in an attempt to access tax information, particularly adjusted gross income. As of April 6, 2017, IRS reported that fewer than 8,000 fraudulent returns from this incident had been filed, processed, and issued refunds, but IRS estimated that about 100,000 taxpayers may have been affected. The Data Retrieval Tool was taken offline while IRS and the Department of Education made updates and will not be available for completing applications for the current school year (2017-2018). As of November 2017, taxpayers could use the Data Retrieval Tool for completing financial aid applications for the next school year (2018-2019).
While IRS initiated the RRT for this incident, an industry official said that the information provided in the press release was more detailed than what was previously provided to industry partners via the RRT. The RRT is administered separately from the ISAC. According to IRS officials, they intend to eventually integrate components of the RRT into the ISAC to further streamline information sharing. Specifically, IRS envisions the ISAC serving as the primary mechanism for states and industry partners to report and escalate IDT refund fraud incidents by facilitating communication among participants. IRS does not have a timeline for this integration.

The ISAC Pilot Partially Aligns with Leading Practices for Pilot Design, but IRS Does Not Have a Plan to Improve Alignment
In 2016, we identified five leading practices for designing a well-developed and documented pilot program: (1) ensuring stakeholder communication, (2) establishing objectives, (3) ensuring scalability, (4) having an assessment methodology, and (5) developing a data-analysis plan. These practices enhance the quality, credibility, and usefulness of evaluations and help ensure that time and resources are used effectively. Each leading practice shares common elements but serves a unique purpose and builds on the others. For example, four of the five leading practices recommend either establishing criteria for assessing whether the pilot's objectives have been met or developing a data plan necessary for effectively evaluating the pilot. While the ISAC pilot is in its nascent stages, IRS has taken steps that partially align with key aspects of all five leading practices. (See figure 5.)

Ensure appropriate two-way stakeholder communication: In 2016, we reported that it is critical that agencies identify who the relevant stakeholders are and communicate early and often to address their concerns and convey the initiative's overarching benefits. IRS's efforts mostly aligned with this practice because IRS included stakeholder input during the design, implementation, and preliminary stages of the data-gathering and assessment phases of the pilot. IRS, through the ISAC working group and the Board, communicated with stakeholders before, during, and after forming the ISAC. Such communication helped ensure that stakeholders were engaged and that their views were understood and incorporated. For example, in 2016, IRS's contractor conducted a preliminary assessment and interviews to compile and present stakeholder views and aspirations for the ISAC. This process included meeting with state officials and industry partners about ISAC preferences, suggestions, concerns, and risks. According to the IRS ISAC Executive Official, ahead of the ISAC launch, IRS established several mechanisms to ensure ongoing stakeholder input, including coordinating with state and industry trade organizations, such as the FTA and the American Coalition of Taxpayer Rights, to gain their endorsement. IRS and its contractor also solicited feedback at conferences, such as FTA's annual conferences. During a 3-day fraud simulation exercise hosted by IRS's contractor, participants discussed partner actions, needs, and processes to inform the ISAC's development. Additionally, IRS conducted a stakeholder analysis, which documented stakeholders' engagement in the ISAC Partnership. This analysis is intended to inform the development of the ISAC communications plan.
Finally, the ISAC Partnership's governance structure, which includes representatives from states and industry, helps facilitate communication among stakeholders. Despite these efforts, IRS's message about the ISAC's benefits has not fully reached states. In our focus groups, a few state officials reported that they are unclear about the benefit of the ISAC. To help improve communication, the Board invited relevant trade organizations to participate in its July Board meetings. IRS officials reported that the message about the benefits of the ISAC may not have initially reached states because it took time to build trust among state and industry partners. FTA confirmed that states may not have understood the benefits of working with IRS and industry partners and were wary of joining the ISAC. Further, IRS officials said that some trade organizations that endorsed the ISAC had differing views about the organization of the ISAC—such as who should be invited to participate—which made it challenging for IRS to effectively garner support. A few states reported in our focus groups that FTA's endorsement was important to their decision to join the ISAC. Until IRS further communicates the ISAC's benefit to current and potential stakeholders, IRS and the ISAC Board may face challenges in reaching their goal of increasing robust participation in the ISAC. We discuss how IRS can improve its outreach to state and industry partners later in this report.

Establish well-defined, appropriate, clear, and measurable objectives: In our 2016 report, we found that well-formulated objectives help ensure that appropriate evaluation data can be collected from the outset of the pilot so that data are available for measuring performance against clear goals and standards. Broad objectives should be translated into specific researchable questions that articulate what will be assessed. Additionally, we have reported that agencies should establish measurable goals for determining when the pilot progresses from one stage to the next to improve their ability to evaluate the success of the pilot. IRS's efforts mostly aligned with this leading practice. For example, the ISAC's charter sets forth objectives, which include (1) exchanging information among participants, (2) providing a forum for real-time responses to fraud schemes, and (3) promoting strategies to detect and prevent fraud. In February 2017, the Board established the metrics subgroup to assess the performance of the ISAC and develop metrics. The Board noted that metrics are essential for showing the value added by the ISAC compared to other efforts. The ISAC Roadmap, a planning document that outlines three developmental phases over 4 years, shows that IRS and the Board have considered an implementation plan, as well as how the online platform might evolve in the areas of program operations, infrastructure, analytics, and partner engagement. Additionally, IRS's contractor anticipated and developed risk mitigation strategies to handle scenarios that might arise before, during, and after the ISAC's launch and interfere with reaching the pilot's objectives. Finally, ahead of the ISAC's launch, the contractor refined key operational attributes to help define the ISAC's full desired capabilities. However, IRS has not translated its objectives into specific, researchable questions that articulate what will be assessed. For example, one of the ISAC's objectives is to facilitate the exchange of information among members.
While IRS closely monitors members' use of the ISAC, IRS does not have performance goals, such as desired participation levels, or a plan to assess progress toward those goals, such as members' usage of ISAC data and tools. These are needed to ensure that appropriate evaluation data are collected during the pilot. Furthermore, IRS does not have measurable goals to determine when the pilot should progress to full implementation. In the early stages of a new program or initiative within a program, evaluation questions tend to focus on program process—on how well authorized activities are carried out and reach intended recipients. We have previously reported that common evaluation questions include the following: Is the program being delivered as intended to the targeted recipients? Have any feasibility or management problems emerged? What progress has been made in implementing changes or new provisions?

According to IRS officials, the ISAC pilot is still in its early stages; they did not know what to expect the first year but knew they wanted to focus on building trust and, therefore, did not set goals for participation. However, we have previously reported that without well-defined, appropriate, clear, and measurable objectives, it will be difficult to ensure appropriate evaluation data are collected and available to measure performance against the objectives and goals. In short, it will be difficult for IRS to know whether it achieved its objectives. Without knowing this, IRS will have difficulty justifying investing additional resources.

Ensure scalability of pilot design: The purpose of a pilot is generally to inform a decision on whether and how to implement a new approach in a broader context. Identifying criteria or standards for identifying lessons about the pilot will help inform an agency's decisions about scalability and when to integrate pilot activities into overall efforts. We previously reported that the criteria and standards should be observable and measurable events, actions, or characteristics that provide evidence that the pilot objectives have been met. IRS's efforts in designing the ISAC partially aligned with this leading practice. First, IRS identified and integrated lessons learned into its pilot. For example, ahead of the ISAC's launch, IRS's contractor identified potential capabilities of the ISAC based on lessons learned from four ISACs in other industries and a 2-day collaborative session in summer 2015. In February 2017, 1 month after the ISAC's launch, the Board established the metrics subgroup to develop evaluation criteria to determine the extent to which the pilot objectives have been met. According to ISAC Board officials, the metrics subgroup is developing and testing metrics that the ISAC Board expects to use beginning in the 2018 filing season. The metrics are designed to measure participation in the ISAC, contribution of data or information to the ISAC, and the effectiveness of the data or information provided. IRS also took steps to improve the ISAC pilot design, which will help it scale the pilot in the future. For example, in May 2017, IRS's contractor presented lessons learned from the 2017 filing season, including what was accomplished, what should be changed in future filing seasons, and areas for future attention, to consider how well the lessons learned can be applied when the pilot is scaled up.
The contractor's presentation also outlined recommendations from a May 2017 independent assessment of the ISAC, including the current status of each recommendation and the actions needed to implement them. In addition, during the July 2017 ISAC Board meeting, IRS's contractor discussed lessons learned, and the IRS ISAC Executive Official discussed takeaways thus far from standing up the ISAC. Finally, IRS took steps to establish criteria for assessing the pilot's performance, but these steps are primarily related to participation, access, and data contribution requirements. IRS does not have criteria that would inform decisions about the ISAC's scalability, including when it is appropriate to include more state and industry members, how to identify additional members, or how to expand the functionalities of the online platform. For example, IRS has yet to articulate the criteria to determine the appropriate time frame for the ISAC to remain in the pilot stage and does not have a plan to decide how and when the ISAC will move from the pilot stage into full implementation. However, IRS officials have said that the ISAC will likely continue in the pilot phase through the 2018 filing season. According to IRS officials, IRS had prioritized other activities and is now turning its attention to plans for scaling the pilot. Without measurable evaluation criteria that provide evidence that the ISAC pilot objectives have been met, the Board will have difficulty assessing the ISAC's performance and making decisions about scalability.

Clearly articulate an assessment methodology: In 2016, we reported that key features of an assessment methodology include a strategy for comparing the pilot's implementation and results with other efforts; a clear plan that details the type and source of the data necessary to evaluate the pilot; and methods for data collection, including the timing and frequency. While IRS's efforts minimally aligned with this leading practice, it has taken some steps to clearly articulate its assessment methodology. For example, according to the IRS ISAC Executive Official, IRS plans to evaluate the extent to which the revenue protected by the ISAC pilot compares to existing fraud detection and prevention efforts, including the External Leads Program. To help accomplish this, IRS took preliminary steps to collect and track metrics related to the ISAC's performance and compare the ISAC's efforts against other mechanisms to combat fraud. For example, IRS's contractor collects and disseminates program metrics and ISAC analytics weekly, including the total number of members, leads, alerts, and Internet Protocol (IP) addresses. This is intended to help assess progress in expanding the ISAC and identifying fraud. In addition, the metrics subgroup started comparing ISAC leads against information collected from the states as part of its effort to assess ISAC data quality. However, IRS has not completed an assessment methodology and data gathering strategy that outlines the type and source of data necessary to evaluate the pilot and to assess progress in achieving each of the ISAC's objectives, including whether the ISAC successfully facilitates the exchange of information and helps detect and prevent fraud. IRS also does not have a strategy for comparing the pilot's implementation and results with other efforts.
For example, while IRS officials expect to determine the federal revenue protected by the ISAC and compare that to other efforts, IRS has not formalized this plan, and IRS officials do not expect to start until at least October 2017, when the needed data become available. Additionally, according to IRS's ISAC Executive Official, state and industry partners—who are important stakeholders in the ISAC—may not be able to track dollars protected through the ISAC. As a result, IRS may only know the federal dollars protected, while the amount protected at the state level may remain unknown. This makes it more difficult to communicate the potential benefits to states. Furthermore, the ISAC could be collecting additional data to better meet its objectives. While quantifying federal dollars protected is a key indicator of the ISAC's success, that metric alone will not demonstrate the ISAC's benefit and effectiveness. Without a documented strategy to compare the ISAC pilot to other efforts and a methodology that details the type and source of data necessary to evaluate the pilot—beyond the federal dollars protected by the ISAC that would otherwise have gone undetected—IRS may find it difficult to assess the effectiveness of the pilot, identify areas for improvement, and demonstrate its capabilities compared with other efforts.

Develop a data-analysis plan: In conjunction with a clearly articulated assessment methodology, a detailed data-analysis plan identifies who will analyze the data as well as when and how data will be analyzed to assess the pilot's performance and draw conclusions about how to improve procedures moving forward. As we previously reported, the results will show the successes and challenges of the pilot and, in turn, how the pilot can be incorporated into broader efforts. While IRS's efforts minimally aligned with this leading practice, it has taken some steps to measure performance at the activity level. For example, IRS worked with its contractor to regularly track and report engagement metrics; user statistics; and analytics on alerts, leads, and device IP addresses, which at times are categorized and aggregated. (See figure 4 earlier in this report for an example of the ISAC data visualization tool with illustrative data.) IRS's contractor also surveyed ISAC members to better gauge user experience with alerts and what participants found to be most valuable on the online platform. In response to other recommendations to develop metrics for measuring the ISAC's performance and success, the contractor's May 2017 ISAC evaluation outlined actions, including beginning to track recommended metrics and exploring means of quantifying the benefit. However, IRS has not formalized the plan to determine the amount of revenue protected, nor has it developed a detailed data-analysis plan to determine how the ISAC pilot's performance will be tracked. The ISAC's metrics subgroup reported that it is working to develop preliminary performance metrics to benchmark the ISAC pilot's progress. It acknowledged that metrics and a detailed analysis plan are essential to demonstrate the ISAC's benefit and reported that it is in the process of developing them. Without a detailed data analysis and evaluation plan that identifies data sources and criteria, IRS cannot fully determine or demonstrate the pilot's performance and challenges. As a result, IRS, its partners, and Congress will have difficulty determining the ISAC's effectiveness and whether IRS should expand the pilot.
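To illustrate the kind of activity-level tracking described above, the following sketch shows how weekly engagement metrics (leads, alerts, and active members) could be rolled up from raw activity records. The record layout and field values are hypothetical, assumed for illustration; they are not the ISAC's or the contractor's actual data model.

# Illustrative only: hypothetical record layout, not the ISAC's actual data model.
from collections import Counter
from datetime import date

# Each record: (week_ending, kind, member_id), e.g., from a platform export.
activity = [
    (date(2017, 3, 3), "lead", "state_A"),
    (date(2017, 3, 3), "alert", "industry_B"),
    (date(2017, 3, 10), "lead", "state_A"),
    (date(2017, 3, 10), "lead", "industry_B"),
]

def weekly_rollup(records):
    """Count leads/alerts per week and the distinct members contributing each week."""
    counts = Counter((week, kind) for week, kind, _ in records)
    contributors = {}
    for week, _, member in records:
        contributors.setdefault(week, set()).add(member)
    return counts, contributors

counts, contributors = weekly_rollup(activity)
for week in sorted(contributors):
    print(week, "leads:", counts[(week, "lead")],
          "alerts:", counts[(week, "alert")],
          "active members:", len(contributors[week]))

A data-analysis plan of the kind the leading practice calls for would go further, specifying who runs such rollups, how often, and against what benchmarks the results are judged.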
IRS officials said they are still learning about the five leading practices for pilot design, and, as noted, the ISAC at least partially aligns with each one. According to internal control standards in the federal government, an agency should formulate plans to achieve its objectives in order to meet them. Without such a plan to inform decisions about the ISAC's benefits and performance, IRS, its partners, and Congress will have difficulty determining the effectiveness of the pilot and whether to proceed with full implementation.

The ISAC Board Should Develop an Outreach Plan to Improve the Pilot
IRS took actions to improve the ISAC pilot, including waiving the requirement for states to contribute data. However, IRS does not have an outreach plan to increase membership or inform states about the ISAC's benefits.

IRS Waived the Data Contribution Requirement for 2017 and Improved Collaboration with Endorsing Organizations
IRS officials determined that requiring participating states to contribute data on suspected fraud may be a potential barrier that limits participation in the ISAC. Therefore, IRS waived the data contribution requirement for the first year, and one state subsequently contributed data to the ISAC in the 2017 filing season. As of October 2017, 5 states had contributed data and 8 states had submitted 29 alerts. In our focus groups, officials from a few states reported that they were concerned about the data contribution requirement, were unsure whether they had the resources to contribute such data, and did not fully understand the requirement's terms. IRS officials attributed this year's low data contribution to the time needed to build trust among partners. The ISAC Board sought to reframe the discussion about data contribution and, in July 2017, changed the language to describe data contribution as a data/information opportunity.

Endorsing organizations are another potential tool to increase participation in the ISAC. Five trade organizations—the American Coalition of Taxpayer Rights, the Council for Electronic Revenue Communication Advancement, the Computer and Communications Industry Association, the Free File Alliance, and FTA—support the ISAC Partnership as endorsing organizations. According to IRS, endorsing organizations provide additional support for the ISAC concept and are uniquely positioned to serve as links between the ISAC and the sectors they represent. While they are not ISAC members and therefore cannot access the online platform, their role is important for building connections between stakeholders. However, according to FTA officials, IRS did not effectively leverage FTA to communicate the ISAC's benefits to states during the first year of the pilot, although IRS and the ISAC Board have since taken important steps to improve collaboration. FTA endorsed the ISAC in February 2017, and, in our focus groups, both state and industry officials said the endorsement was important for securing more widespread state participation. According to FTA, IRS did not incorporate its feedback about the probable response from states to the ISAC, which FTA officials believe may have resulted in a lower-than-expected rate of participation by states in the early months of the ISAC. According to IRS officials, IRS attempted to work with endorsing organizations while standing up the ISAC online platform and received comments from FTA and an industry trade organization that reflected different interests and priorities.
According to IRS officials, IRS attempted to find a middle ground. More recently, the Board attempted to better engage endorsing organizations by including them in a July 2017 meeting about planning the next steps for the ISAC.

Taxpayer Data Safeguards Determine Access to Information Shared in the ISAC
IRS, states, and industry partners have all faced data safeguarding challenges to participating in the ISAC. For example, IRS is unable to share taxpayer or record-level data in the ISAC due to the section 6103 safeguards discussed earlier in this report. In a June 2017 report to Congress, the Electronic Tax Administration Advisory Committee (ETAAC) recommended that IRS identify, analyze, and mitigate barriers that preclude IRS from sharing information in the ISAC. IRS officials said that IRS's inability to share information in the ISAC limits the ISAC's full benefit. While the ISAC is designed to be a three-pronged collaboration among IRS, states, and industry, because IRS does not view or contribute record-level data, such data flow only between states and industry. Further, it may be challenging for the ISAC Partnership to meet a key goal of increasing participation among state and industry members if a key stakeholder in the partnership is unable to fully participate. IRS officials said the agency is considering options to allow it to participate more fully in the ISAC. Specifically, IRS included a request for a legislative change to section 6103 in a report to Treasury. This request is an important step toward enabling the ISAC to be an effective information sharing and collaboration tool.

Likewise, some states faced legal hurdles to joining the ISAC. According to FTA, while it outlined potential concerns about those hurdles in a memo to state legal counsels, it expected the hurdles would be manageable for states. Furthermore, some industry partners face difficulties in accessing the ISAC's online platform. As previously mentioned and shown in figure 2, tax preparation companies—covered under section 7216 and referred to as 7216 industry partners—have full access to all of the information provided to the ISAC. However, financial institutions—not covered under section 7216 and referred to as non-7216 industry partners—have limited access to information in the ISAC. According to IRS officials, IRS is considering a request from financial institutions to amend regulations under section 7216 to allow them greater access to the ISAC.

ISAC Partnership Has Not Developed an Outreach Plan to Improve State and Industry Partners' Participation
In the 2017 filing season, contribution levels from IRS, states, and industry partners varied significantly. While IRS invited states and Security Summit partners to participate, other stakeholders—such as industry partners that are not members of the Security Summit—have not been included. While IRS has taken steps to reach out to state and industry partners, IRS and the ISAC Partnership have opportunities to more fully engage stakeholders. One challenge to state participation is that there has been, at times, a disconnect between the ISAC Board's and states' perceptions of how the ISAC can be used to prevent and detect fraud. For example, IRS views the ISAC as the key tool for information sharing between IRS, states, and industry partners in the future.
However, officials from all states represented in our focus groups noted that they either had not used, or were unfamiliar with, the ISAC-specific resources—such as the data visualization tools shown previously in figure 4—that are intended to help users identify IDT refund fraud trends more broadly. Moreover, officials from a few states reported that IRS already sends more data on suspected fraud through other channels than they can effectively process with their current resources. IRS is working to quantify the benefits of the ISAC, which could help enhance states' understanding. The ISAC Board is working with IRS's research organization to quantify the refund fraud averted and federal dollars protected by analyzing Treasury receipts. According to IRS, it is working with ISAC state members to communicate the value of the ISAC to their leadership and share key activities, as appropriate, to enable their continued involvement. IRS and the ISAC Board also took several steps to inform states and members of industry—both members of the ISAC and non-members—about the benefits of the ISAC. For example, IRS's contractor provided training to users of the ISAC to demonstrate the platform's functionality and tools. In addition, IRS officials presented information about the ISAC at conferences with tax industry partners. Relatedly, ETAAC recently recommended that IRS encourage greater participation in the ISAC by stakeholders involved in tax administration.

In addition to inviting states to join the ISAC, IRS invited industry partners who were members of the Security Summit to join. Security Summit industry partners account for the majority of tax returns IRS accepts using a paid preparer or tax software. The ISAC Board limited industry participation in the ISAC Partnership to Security Summit partners because it was concerned about securely authenticating new members and scaling up the size of the pilot to accommodate additional participants. Furthermore, although three ISAC members are non-7216 financial institutions, IRS does not consider banks or credit unions—both of which cash refund checks—to be fully represented in the ISAC. IRS officials said they were focused on engaging tax preparation companies and building trust among existing stakeholders. In June 2017, ETAAC recommended that IRS address expanding the participation of financial institutions in the ISAC, as well as in other efforts.

Although the ISAC Partnership does not have an outreach plan, such a plan could, for example, address how to expand ISAC membership or how to resolve the disconnect between the benefits identified by the ISAC Board and how states perceive the ISAC can be used to prevent and detect fraud in their states. According to IRS officials, the ISAC Partnership has not developed a plan yet because it has been focused on other priorities. Project management standards state that when an entity is planning a project—that is, a temporary endeavor to create a unique product, service, or result—it is important to define relevant activities and determine the scope, sequence, and schedule of those activities, among other things. In addition, federal Standards for Internal Control in the Federal Government state that federal agencies should establish plans to help ensure that goals and objectives—such as increasing participation in the ISAC—can be met.
Additionally, internal control standards state that documentation of agency decisions and activities is important because it provides a means to retain organizational knowledge, mitigate the risk of having that knowledge limited to a few personnel, and communicate that knowledge to external parties, as appropriate. Furthermore, we have reported that without developing a user outreach plan, an agency risks being unable to provide services to its users where they need them most. For the ISAC, this could mean less effective collaboration among stakeholders or missed opportunities to prevent IDT refund fraud.

Conclusions
IRS has taken important steps to improve its ability to respond to the ongoing challenge of IDT refund fraud. Among these efforts, the ISAC and RRT show promise for increasing information sharing and collaboration among IRS, states, and industry to help detect and prevent IDT refund fraud and coordinate responses to fraud incidents. The ISAC pilot goes beyond existing fraud information sharing efforts and has strengthened collaboration among stakeholders. While IRS has taken actions that partially align with key aspects of five leading practices for effective pilot design, its actions do not fully align with any of the practices. Further, IRS has not developed criteria for assessing whether the pilot's objectives have been met. Without this assessment and better alignment with leading practices for pilot design, IRS, its partners, and Congress will have difficulty determining the effectiveness of the pilot and whether and when to proceed with full-scale implementation.

The benefit of the ISAC can only be fully realized when there is robust participation among stakeholders. However, officials from all states represented in our focus groups noted that they either had not used, or were unfamiliar with, the ISAC-specific resources. Part of the issue is that IRS has not effectively communicated the benefits of the ISAC to states so they can better understand how the ISAC will help them combat IDT refund fraud. Developing an outreach plan to broaden membership to additional states, non-Security Summit members of industry, and financial institutions would further promote stakeholder collaboration and fraud information sharing.

Recommendations for Executive Action
We are making the following two recommendations to IRS:

The Acting Commissioner of Internal Revenue should ensure that the Information Sharing and Analysis Center (ISAC) pilot better aligns with leading practices for effective pilot design. This should include establishing criteria for assessing whether the pilot's objectives have been met before making decisions about its scalability and whether, how, and when to proceed to full implementation; and developing a data analysis plan that identifies the data sources and criteria necessary for effectively evaluating the pilot. (Recommendation 1)

The Acting Commissioner of Internal Revenue should ensure that the ISAC Partnership develops an outreach plan to expand membership and improve states' and industry partners' understanding of the ISAC's benefits. (Recommendation 2)

Agency Comments and Third-Party Views
We provided a draft of this report to IRS and the co-chairs of the ISAC Board for comment. In written comments reproduced in appendix II, IRS agreed with both recommendations. IRS reported that it will be finalizing an Identity Theft Tax Refund Fraud Pilot Management Plan to help it better align the ISAC pilot with leading practices for pilot design.
Additionally, IRS reported that it will work with the ISAC Board to ensure that the Board develops an outreach plan to expand membership and improve states' and industry partners' understanding of the ISAC's benefits. In an email dated October 27, 2017, the ISAC Board state and industry co-chairs also agreed with both recommendations and provided technical comments, which were incorporated as appropriate.

We are sending copies of this report to the Chairmen and Ranking Members of other Senate and House committees and subcommittees that have appropriation, authorization, and oversight responsibilities for IRS. We are also sending copies to the Acting Commissioner of Internal Revenue, the Secretary of the Treasury, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9110 or lucasjudyj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

Appendix I: Objectives, Scope, and Methodology
The objectives of this engagement were to (1) describe actions Security Summit partners are taking to implement an Information Sharing and Analysis Center (ISAC) and a Rapid Response Team (RRT); (2) evaluate the extent to which the ISAC pilot design aligns with leading practices; and (3) identify actions, if any, that the Internal Revenue Service (IRS) could take to improve the ISAC pilot. We selected the ISAC and RRT from among the initiatives identified in the June 2016 IRS Commissioner's Security Summit Update Report as the focus of our review because of their importance, their potential for a major effect on IDT refund fraud, and the timeline for planned actions. Although the External Leads Program and the Industry Leads Program are discussed in this report, we did not select them for in-depth review.

To address all objectives, we reviewed IRS, ISAC Senior Executive Board (Board), ISAC working group, and Information Sharing working group documents. These included meeting minutes, planning documents, the biweekly ISAC dashboard, and IRS's contractor's weekly ISAC updates. We also observed a training session IRS's contractor conducted for new ISAC members, and we received a demonstration of the ISAC online platform capabilities, including the visualization tools. (See figure 4.) In addition, we conducted semistructured interviews with the IRS, state, and industry co-leads of the ISAC and Information Sharing working groups; the ISAC Board co-chairs; the outreach and metrics ISAC Board subgroups; and trade organizations, including the Federation of Tax Administrators and the American Coalition of Taxpayer Rights.

To further address all objectives, we conducted four focus groups in March and April 2017—two sessions with states and two sessions with industry partners:
1. Five representatives from members of industry that were involved in the ISAC or RRT.
2. Seven representatives from members of industry that were involved in the ISAC or RRT.
3. Six officials from states randomly selected from among those with an official who participated in the ISAC or Information Sharing working groups.
4. Five officials from states randomly selected from among those that had not been involved in either working group.
We excluded from our focus group sample those states or industry partners with whom we previously conducted—or planned to conduct—a separate semistructured interview. We asked similar questions in each focus group, with some variation between state and industry groups. We recorded and transcribed the focus group sessions for review. We analyzed the focus group transcripts to identify common themes, patterns, and comments. We used these focus group discussions to provide illustrative examples of state and industry perceptions of the benefits of, and challenges to, implementing the ISAC and RRT. However, the responses are non-generalizable and do not reflect the opinions of all states or industry partners. Because of concerns about identifying which state and industry partners have been involved in these fraud prevention efforts, we are not identifying the focus group participants or the state officials and industry representatives that we interviewed.

To evaluate the extent to which the ISAC aligns with the five leading practices for pilot design, we reviewed our prior work and compared IRS actions against these practices and criteria. Our April 2016 report describes the criteria we developed for evaluating pilot design and the methodology we used to do so. For this work, we evaluated each subcomponent of the leading practices to determine the extent to which it met the criteria (fully, mostly, partially, or not at all). Each of those assessments was subsequently verified by another individual. To identify actions, if any, that IRS could take to improve the ISAC pilot, we assessed IRS and the ISAC Board's efforts to implement the ISAC pilot using internal control standards and performance management standards.

We conducted this performance audit from August 2016 to November 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Comments from the Internal Revenue Service

Appendix III: GAO Contact and Staff Acknowledgments
In addition to the individual named above, the following staff made key contributions to this report: Joanna Stamatiades, Assistant Director; Melissa King, Analyst-in-Charge; Parul Aggarwal; Amy Bowser; Ann Czapiewski; Robert Gebhart; Layla Moughari; and Cynthia Saunders.
Why GAO Did This Study
IRS estimates that fraudsters attempted at least $14.5 billion in IDT tax refund fraud in tax year 2015. Since 2015, GAO's High-Risk List has included IRS's efforts to address IDT refund fraud. Starting with its March 2015 Security Summit, IRS has partnered with state tax administrators and tax preparation companies, among others, on initiatives aimed at better preventing and detecting IDT refund fraud.

GAO was asked to examine IRS's efforts to collaborate with these partners. This report, among other things, (1) describes actions taken to implement the ISAC and RRT, (2) evaluates the extent to which the ISAC pilot aligns with leading practices for pilot design, and (3) identifies actions, if any, that IRS could take to improve the ISAC pilot. GAO reviewed planning and other documents on the initiatives. It interviewed IRS and state officials and industry and trade organization representatives, among others involved in the ISAC and RRT. GAO also conducted four non-generalizable focus groups with state and industry partners.

What GAO Found
The Internal Revenue Service (IRS) launched an Identity Theft Tax Refund Fraud Information Sharing and Analysis Center (ISAC) pilot for the 2017 filing season. It aims to allow IRS, states, and tax preparation industry partners to quickly share information on identity theft (IDT) refund fraud. The ISAC pilot includes two components: an online platform run by IRS to communicate data on suspected fraud, and an ISAC Partnership, a collaborative organization comprising IRS, states, and industry that is intended to be the governance structure. As of November 2017, the ISAC had 48 members: 31 states (including full members and those receiving alerts only), 14 tax preparation companies, and 3 financial institutions. In addition, IRS is using a Rapid Response Team (RRT), in partnership with states and industry members, to coordinate responses to IDT refund fraud incidents that pose a significant threat, within 24 to 72 hours of an incident being discovered. IRS deployed the RRT for six incidents in 2016 and once in 2017.

GAO found that the ISAC pilot aligns with key aspects of all five leading practices for effective pilot design that GAO previously identified, but it fully aligns with none. For example, IRS has worked to incorporate stakeholder input, but its message about the ISAC's benefits has not fully reached states. Further, IRS does not have criteria for assessing whether the pilot's objectives have been met. Without this assessment and better alignment with leading practices, IRS, its partners, and Congress will have difficulty determining the effectiveness of the pilot and whether to implement it more broadly.

IRS has taken actions to improve the ISAC pilot, but the ISAC Partnership does not have an outreach plan. While the ISAC Senior Executive Board limited industry participation to partners who participated in its Security Summit, the ISAC has obtained support from trade organizations. However, officials from almost all states represented in our focus groups noted that they either had not used, or were unfamiliar with, the ISAC-specific resources. While the ISAC Board has taken steps to engage stakeholders, the ISAC Partnership does not have an outreach plan to increase membership and improve states' and industry partners' understanding of the ISAC's benefits. Without such a plan, collaboration among stakeholders is likely to be less effective, and opportunities to prevent IDT refund fraud may be missed.
What GAO Recommends
GAO recommends that IRS ensure that (1) the ISAC better aligns with leading practices for effective pilot design and (2) the ISAC Partnership develops an outreach plan to expand membership and improve understanding of the ISAC's benefits. IRS and the ISAC Board state and industry co-chairs agreed with the recommendations.
Factors Affecting Federal Infrastructure Permitting
In our prior work, we identified a range of factors that can affect permitting timeliness and efficiency. For the purposes of this statement, we have categorized the factors into five broad categories: (1) coordination and communication, (2) human capital, (3) collecting and analyzing accurate milestone information, (4) incomplete applications, and (5) significant policy changes.

Coordination and Communication
Effective coordination and communication between agencies and applicants is a critical factor in an efficient and timely permitting process. Standards for internal control in the federal government call for management to externally communicate the necessary quality information to achieve the entity's objectives, including by communicating with and obtaining quality information from external parties. We found that better coordination between agencies and applicants could result in more efficient permitting. For example, in our February 2013 review of natural gas pipeline permitting, we reported that virtually all applications for pipeline projects require some level of coordination with one or more federal agencies, as well as others, to satisfy requirements for environmental review. For example, BIA is responsible for, among other things, approving rights of way across lands held in trust for an Indian or Indian tribe and must consult and coordinate with any affected tribe. We have reported on coordination practices that agencies use to streamline the permitting process, including the following.

Designating a Lead Coordinating Agency
We have found that having a lead agency coordinate the efforts of federal, state, and local stakeholders is beneficial to permitting processes. For example, in our February 2013 review of natural gas pipeline permitting, industry representatives and public interest groups told us that the interstate process was more efficient than the intrastate process because, in the interstate process, FERC was designated the lead agency for the environmental review. Other agencies may also designate lead entities for coordination. For example, in a November 2016 report, we described how BIA had taken steps to form an Indian Energy Service Center that was intended to, among other things, help expedite the permitting process associated with Indian energy development. We recommended that BIA involve other key regulatory agencies in the service center so that it could more effectively act as a lead agency.

Establishing Coordinating Agreements among Agencies
Establishing coordinating agreements among agencies can streamline the permitting process and reduce the time required by routine processes. For example, in our February 2013 review of natural gas pipeline permitting, we reported that FERC and nine other agencies signed an interagency agreement for early coordination of required environmental and historic preservation reviews to encourage the timely development of pipeline projects.

Using Mechanisms to Expedite Routine or Less Risky Reviews
Agencies can also use mechanisms to streamline reviews of projects that are routine or less environmentally risky. For example, under NEPA, agencies may categorically exclude actions that an agency has found—in NEPA procedures adopted by the agency—do not individually or cumulatively have a significant effect on the human environment and for which, therefore, neither an environmental assessment nor an environmental impact statement is required.
Also under NEPA, agencies may rely on “tiering,” in which broader, earlier NEPA reviews are incorporated into subsequent site-specific analyses. Tiering is used to avoid duplication of analysis as a proposed activity moves through the NEPA process, from a broad assessment to a site-specific analysis. Such a mechanism can reduce the number of required agency reviews and shorten the permitting process. Human Capital Agency and industry representatives cited human capital factors as affecting the length of permitting reviews. Such factors include having a sufficient number of experts to review applications. Some examples include: In June 2015 and in November 2016, we reported concerns associated with BIA’s long-standing workforce challenges, such as inadequate staff resources and staff at some offices without the skills needed to effectively review energy-related documents. In November 2016 we recommended that Interior direct BIA to incorporate effective workforce planning standards by assessing critical skills and competencies needed to fulfill BIA’s responsibilities related to energy development. For a September 2014 report, representatives of companies applying for permits to construct liquefied natural gas (LNG) export facilities told us that staff shortages at the Pipeline and Hazardous Materials Safety Administration delayed spill modeling necessary for LNG facility reviews. In an August 2013 review of Interior’s Bureau of Land Management (BLM) and oil and gas development, industry representatives told us that BLM offices process applications for permit to drill at different rates, and inadequate BLM staffing in offices with large application workloads is one of the reasons for these different rates. Agencies have taken some actions to mitigate human capital issues. For example, we reported in August 2013 that BLM had created special response teams of 10 to 12 oil and gas staff from across BLM field offices to help process applications for permits to drill in locations that were experiencing dramatic increases in submitted applications. In July 2012, we recommended that Interior instruct two of its bureaus to develop human capital plans to help manage and prepare for human capital issues, such as gaps in critical skills and competencies. Collecting and Analyzing Accurate Milestone Information Our work has shown that a factor that hinders efficiency and timeliness is that agencies often do not track when permitting milestones are achieved, such as the date a project application is submitted or receives final agency approval, to determine whether they are achieving planned or expected results. In addition, our work has shown that agencies often do not collect accurate information, which prevents them from analyzing their processes in order to improve and streamline them. The following are examples of reports in which we discussed the importance of collecting accurate milestone information: In December 2017, we found that the National Marine Fisheries Service and the U.S. Fish and Wildlife Service were not recording accurate permit milestone dates, so it was not possible to determine whether agencies met statutory review time frames. We recommended that these agencies clarify how and when staff should record review dates so that the agencies could assess the timeliness of reviews.
We found in June 2015 that BIA did not have a documented process or the data needed to track its review and response times; to improve the efficiency and transparency of BIA’s review process, we recommended that the agency develop a process to track its review and response times and improve efforts to collect accurate review and response time information. We found in an August 2013 report that BLM did not have complete data on applications for permits to drill, and without accurate data on the time it took to process applications, BLM did not have the information it needed to improve its operations. We recommended that BLM ensure that all key dates associated with the processing of applications for permits to drill are completely and accurately entered into its system to improve the efficiency of the review process. Standards for internal control in the federal government call for management to design control activities to achieve objectives and respond to risks, including by comparing actual performance with planned or expected results and analyzing significant differences. Without tracking performance over time, agencies cannot make such comparisons. The standards also call for agency management to use quality information to achieve agency objectives; such information is appropriate, current, complete, accurate, accessible, and provided on a timely basis. As we have found, having quality information on permitting milestones can help agencies identify the duration of the permitting process, analyze process deficiencies, and implement improvements. Incomplete Applications According to agency officials we spoke with and agency documents we reviewed, incomplete applications are a factor that can affect the duration of reviews. For example, in a 2014 budget document, BLM reported that—due to personnel turnover in the oil and gas industry—operators were submitting inconsistent and incomplete applications for permits to drill, which was delaying the approval of permits. In a February 2013 report, officials we spoke with from U.S. Army Corps of Engineers district offices said that incomplete applications may delay their review because applicants are given time to revise their application information. Deficiencies within agency IT systems may also result in incomplete applications. As we noted in a July 2012 report, Interior officials told us that their review of oil and gas exploration and development plans was hindered by limitations in Interior's IT system that allowed operators to submit inaccurate or incomplete plans, after which plans were returned to operators for revision or completion. Agencies can reduce the possibility of incomplete applications by encouraging early coordination between the prospective applicant and the permitting agency. According to agency and industry officials we spoke with, early coordination can make the permitting process more efficient. One example of early coordination is FERC’s pre-filing process, in which an applicant may communicate with FERC staff to ensure an application is complete before formally submitting it to the commission. Significant Policy Changes Changes in U.S. policy unrelated to permitting are a factor that can also affect the duration of federal permitting reviews. For example, in September 2014, we reported that the Department of Energy did not approve liquefied natural gas exports to countries without free-trade agreements with the United States for a period of 16 months.
We found that the Department stopped approving applications while it conducted a study of the effect of liquefied natural gas exports on the U.S. economy and the national interest. Exporting liquefied natural gas was an economic reversal from the previous decade, when the United States was expected to become an importer of liquefied natural gas. Policy changes can result from unforeseen events. After the Deepwater Horizon incident and oil spill in 2010, Interior strengthened many of its safety requirements and policies to prevent another offshore incident. For example, Interior put new safety requirements in place related to well control, well casing and cementing, and blowout preventers, among other things. In a July 2012 report, we found that after the new safety requirements went into effect, review times for offshore oil and gas drilling permits increased, as did the number of times that Interior returned a permit to an operator. In conclusion, our past reports have identified varied factors that affect the timeliness and efficiency of federal energy infrastructure permitting reviews. Federal agencies have implemented a number of our recommendations and taken steps toward more efficient permitting, but several of our recommendations remain open, presenting opportunities to continue to improve permitting processes. Chairmen Palmer and Gianforte, Ranking Members Raskin and Plaskett, and Members of the Subcommittees, this concludes my prepared statement. I would be pleased to answer any questions that you may have at this time. GAO Contacts and Staff Acknowledgments If you or your staff members have any questions concerning this testimony, please contact Frank Rusco, Director, Natural Resources and Environment, who may be reached at (202) 512-3841 or RuscoF@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this statement. Key contributors to this testimony include Christine Kehr (Assistant Director), Dave Messman (Analyst-in-Charge), Patrick Bernard, Marissa Dondoe, Quindi Franco, William Gerard, Rich Johnson, Gwen Kirby, Rebecca Makar, Tahra Nichols, Holly Sasso, and Kiki Theodoropoulos. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study Congress recognizes the harmful effects of permitting delays on infrastructure projects and has passed legislation to streamline project reviews and hold agencies accountable. For example, in 2015 Congress passed the Fixing America's Surface Transportation Act, which included provisions streamlining the permitting process. Federal agencies, including the Department of the Interior and FERC, play a critical role by reviewing energy infrastructure projects to ensure they comply with federal statutes and regulations. This testimony discusses factors GAO found that can affect energy infrastructure permitting timeliness and efficiency. To do this work, GAO drew on reports issued from July 2012 to December 2017. GAO reviewed relevant federal laws, regulations, and policies; reviewed and analyzed federal data; and interviewed tribal, federal, state, and industry officials, among others. What GAO Found GAO's prior work has found that the timeliness and efficiency of permit reviews may be affected by a range of factors. For the purposes of this testimony, GAO grouped these factors into five categories. Coordination and Communication. GAO found that better coordination between agencies and applicants is a factor that could result in more efficient permitting. Coordination practices that agencies can use to streamline the permitting process include the following: Designating a Lead Coordinating Agency . GAO found that having a lead agency coordinate the efforts of federal, state, and local stakeholders is beneficial to permitting processes. For example, in a February 2013 report on natural gas pipeline permitting, industry representatives and public interest groups told GAO that the interstate process was more efficient than the intrastate process because in the interstate process the Federal Energy Regulatory Commission (FERC) was the lead agency for the environmental review. Establishing Coordinating Agreements among Agencies . In the February 2013 report, GAO reported that FERC and nine other agencies signed an interagency agreement for early coordination of required environmental and historic preservation reviews to encourage the timely development of pipeline projects. Human Capital. Agency and industry representatives cited human capital factors as affecting the length of permitting reviews. Such factors include having a sufficient number of experts to review applications. GAO reported in November 2016 on long-standing workforce challenges at the Department of the Interior's Bureau of Indian Affairs (BIA), such as inadequate staff resources and staff at some offices without the skills to effectively conduct such reviews. GAO recommended that Interior incorporate effective workforce planning standards by assessing critical skills and competencies needed to fulfill its responsibilities related to energy development. Interior agreed with this recommendation, and BIA stated that its goal is to develop such standards by the end of fiscal year 2018. Collecting and Analyzing Accurate Milestone Information. GAO's work has shown that a factor that hinders efficiency and timeliness is that agencies often do not track when permitting milestones are achieved, such as the date a project application is submitted or receives final agency approval. Having quality information on permitting milestones can help agencies better analyze process deficiencies and implement improvements. Incomplete Applications.
Agency officials and agency documents cited incomplete applications as affecting the duration of reviews. For example, in a 2014 budget document, the Bureau of Land Management (BLM) reported that—due to personnel turnover in the oil and gas industry—operators were submitting inconsistent and incomplete applications for drilling permits, delaying permit approvals. Significant Policy Changes. Policy changes unrelated to permitting can affect permitting time frames. For example, after the 2010 Deepwater Horizon incident and oil spill, Interior issued new safety requirements for offshore drilling. GAO found that review times for offshore oil and gas drilling permits increased after these safety requirements were implemented. What GAO Recommends GAO has made numerous recommendations about ways to improve energy infrastructure permitting processes. Federal agencies have implemented a number of GAO's recommendations and taken steps to implement more efficient permitting, but several of GAO's recommendations remain open, presenting opportunities to continue to improve permitting processes.
Background EEOICPA, as amended, generally provides compensation under Part B to employees of Energy and, under Part E, to employees of its contractors, who were involved in the production of U.S. nuclear weapons and developed illnesses related to their exposure to radiation and other toxins at Energy facilities. During and shortly after World War II, the United States sponsored the development, production, and testing of nuclear weapons. It used a network of facilities that eventually expanded into a complex of as many as 365 industrial sites and research laboratories throughout the country that employed more than 600,000 workers. Some of the production sites were owned by Energy or its predecessor agencies, and in many instances contractors managed operations at the facilities. Workers used manufacturing processes that involved handling dangerous materials and were often provided inadequate protection from exposure, although protective measures have increased over time. Because of national security concerns, they also worked under great secrecy, were unknowingly exposed to toxic materials, and were often given minimal information about the materials they handled and the potential health consequences of exposure to them. In some cases, the extent of the potential negative effects of the toxins may not have been fully understood at the time of workers’ exposure. EEOICPA, as amended, consists of two compensation programs, Part B and Part E. The Part B program generally provides for $150,000 to eligible current or former employees or their survivors, as well as coverage of future medical expenses associated with certain radiogenic cancer, chronic beryllium disease, and chronic silicosis. Part E provides compensation of up to $250,000 to current or former employees of contractors and subcontractors, or their eligible survivors, for wage loss and impairment, as well as coverage of medical expenses. Under certain circumstances, eligible claimants may receive compensation under both Part B and Part E. Claims Adjudication and Reopening Claims Under Part E, a contracted Energy employee or survivor can file a compensation claim, typically with a DOL district office (see fig. 1). Once a claim is filed, a DOL claims examiner develops the claim and ultimately recommends its approval or denial. To recommend an approval, the claims examiner must determine that the claimant was a current or former employee of an Energy contractor at a given Energy facility and that they were exposed to a toxic substance at that facility. Additionally, the examiner must find that it is at least as likely as not that the exposure was a significant factor in aggravating, contributing to, or causing a covered illness, and that the exposure was related to employment at an Energy facility. One of the resources used by the claims examiner is the Site Exposure Matrices (SEM), an online database of information on worksites, toxic substances, and associated illnesses. If the claims examiner determines that a claim meets all conditions, he or she recommends that DOL’s Final Adjudication Branch approve the claim. The Final Adjudication Branch then reviews the recommendation and issues a final decision. If the claimant provides new evidence before a final decision is reached, the Final Adjudication Branch may return the claim to the district office for additional development or issue a reversal.
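The determinations the claims examiner must make before recommending approval amount to three conditions that must all hold. The Python sketch below is illustrative only; the type and field names are hypothetical and do not represent DOL's actual system.

```python
from dataclasses import dataclass

@dataclass
class PartEClaimFacts:
    # Hypothetical fields standing in for the examiner's determinations above.
    contractor_employee_at_energy_facility: bool
    exposed_to_toxic_substance_at_facility: bool
    exposure_likely_significant_factor_in_illness: bool  # "at least as likely as not"

def examiner_recommendation(facts: PartEClaimFacts) -> str:
    """Sketch of the recommendation logic: approval only if all conditions hold."""
    if (facts.contractor_employee_at_energy_facility
            and facts.exposed_to_toxic_substance_at_facility
            and facts.exposure_likely_significant_factor_in_illness):
        return "recommend approval to the Final Adjudication Branch"
    return "recommend denial"
```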
DOL provides some assistance to claimants as claims are adjudicated, such as assistance that may be required to develop facts pertinent to the claim, customer service activities, and information available in hard copy and on DOL’s website. However, it is generally the claimant’s responsibility to establish entitlement to compensation under the law. If a claim is denied, claimants are informed of several options, one of which is requesting that DOL reopen the claim. Claims can be reopened any time after the Final Adjudication Branch has issued a final decision, either as a result of a claimant request or agency action (see fig. 2). There is no limit to the number of times a claimant may request a reopening, though the claimant must either submit new evidence or identify a change in a relevant program policy when submitting such a request. Reasons for reopening can include an update to the SEM, new medical evidence, or new evidence of covered employment, among others. Moreover, a claimant may request reopening for each of multiple illnesses or conditions. When a claimant requests a reopening, DOL will review the request and either grant or deny the reopening, depending on DOL’s assessment of whether there is sufficient evidence to warrant reopening. When a reopening request is granted, DOL vacates the previous final decision and submits the claim for readjudication. In addition, DOL may also reopen groups of related claims. When DOL announces new evidence linking toxins to illnesses, it can also announce plans to reopen groups of claims potentially affected by the new evidence. In these instances, DOL announces the criteria for reopening, which may involve specific substances or worksites, and provides reopening instructions for claims examiners. For example, Circular 15-04, issued in 2014 (now superseded), informed claims examiners that the substance trichloroethylene had been linked to kidney cancer and that previously denied Part E kidney cancer claims could be reopened. DOL officials previously told us that such steps are limited to instances in which a relatively large number of claims are potentially affected. Site Exposure Matrices (SEM) DOL claims examiners use the SEM to help determine workers’ eligibility for Part E compensation. DOL created this web-based database, which organizes and communicates information on the toxic substances workers were potentially exposed to at specific Energy worksites, certain buildings at the worksites, and while doing specific jobs at the worksites. As of May 2018, the SEM included information on 16,461 toxic substances and 129 former and current sites. It also cross-references the toxic substances with diseases for which there is an established link. In general, the SEM contains only causal links that are based on epidemiological studies, and for which there is medical and scientific consensus. The SEM provides a basis for exposure information, but is not the sole source of information considered by claims examiners during adjudication (see fig. 3). The SEM is publicly available online and continually updated as new exposure data are obtained. According to a 2016 DOL document, there have been at least 656 revisions to the SEM since 2013. New links are primarily drawn from a database of hazardous toxins and associated diseases—known as Haz-Map—formerly maintained by the National Library of Medicine. According to DOL officials, as new links are added to Haz-Map, they are also added to the SEM.
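The cross-referencing the SEM performs can be pictured as two mappings, one from worksites to recorded toxic substances and one from substances to diseases with established links. The sketch below uses toy data and hypothetical names; the actual SEM is a continually updated web database covering thousands of substances.

```python
# Toy data for illustration only; not actual SEM content.
site_substances = {
    "Example Site": {"trichloroethylene", "asbestos"},
}
substance_diseases = {
    "trichloroethylene": {"kidney cancer"},  # the link noted in Circular 15-04
    "asbestos": {"asbestosis"},
}

def linked_diseases_for_site(site: str) -> set:
    """Cross-reference a site's recorded substances with established disease links."""
    diseases = set()
    for substance in site_substances.get(site, set()):
        diseases |= substance_diseases.get(substance, set())
    return diseases

print(linked_diseases_for_site("Example Site"))  # {'kidney cancer', 'asbestosis'} (order may vary)
```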
In 2010, we reported that DOL’s efforts to update the SEM were not subjected to independent outside review to provide assurance that the SEM is comprehensive and scientifically sound. In 2013, the Institute of Medicine evaluated the scientific rigor of the SEM in response to a request from DOL. Its report noted that some examples of causal links to diseases were missing from the SEM and questioned the SEM’s exclusive dependence on Haz-Map as its source for disease and causal information. The report also identified Haz-Map’s lack of peer review as a key limitation. Specifically, the report noted that Haz-Map lacked adequate oversight or content review by external, independent experts; relied heavily on sources that were not peer-reviewed, such as textbooks; and included references that were not easily accessible and were difficult to check, making quality assurance and technical review difficult. In addition, the report suggested that other sources be considered for inclusion in the SEM. Advisory Board on Toxic Substances and Worker Health By law, the Advisory Board is tasked with providing specific categories of technical advice to the Secretary of Labor regarding Part E of EEOICPA. These categories are: (1) the SEM; (2) medical guidance for claims examiners on weighing the medical evidence of claimants; (3) evidentiary requirements for certain claims related to lung disease; and (4) the work of certain experts, namely, industrial hygienists and consulting physicians and their reports. The Advisory Board has subcommittees aligned with these categories (see fig. 4). The Advisory Board charter provides for 12 to 15 members and for 2-year terms for these members. Furthermore, applicable provisions of the Federal Advisory Committee Act’s implementing regulations require that Advisory Board membership be fairly balanced. Accordingly, its members have included representatives of the medical, scientific, and claimant communities. The Advisory Board is authorized until 2024. Office of the Ombudsman for EEOICPA The Office of the Ombudsman for EEOICPA is an independent office within DOL. It was established by the National Defense Authorization Act for Fiscal Year 2005 to provide information to address the concerns of claimants and potential claimants relating to EEOICPA, among other responsibilities. The Office of the Ombudsman submits an annual report to Congress that summarizes the number and types of complaints, grievances, and requests for assistance that it has received during the year. The report also includes an assessment of the most common difficulties encountered by claimants and potential claimants each year. The Secretary of Labor is required to provide a written response and must agree or disagree with specific issues raised in the report. In addition, the Office of the Ombudsman hosts and attends outreach events to assist claimants. The Office of the Ombudsman may not make decisions on claims nor act as an advocate for claimants. DOL Reopened Thousands of Claims Since 2012 and Approved Almost 70 Percent, but Some Claimants Faced Evidentiary Challenges DOL Reopened More Than 7,000 Claims by Contracted Employees for Exposure to Toxins at Energy Worksites and Approved Most Based on the most recently reopened claims from calendar years 2012 through 2017, DOL reopened more than 7,000 claims filed by contracted Energy employees. DOL subsequently approved compensation for 69 percent.
The remaining claims were denied (13 percent), still awaiting a final decision (2 percent), closed (2 percent), deferred (less than 1 percent), or had some other outcome (15 percent). (See fig. 5.) Claims with other outcomes refer to claims for which at least one claimed illness was approved while the others were denied or deferred. Among those more than 7,000 claims, DOL initiated most of the reopenings (80 percent) itself, with fewer reopenings initiated by claimants. Regardless of a claim’s previous status of approved or denied, outcomes after reopening varied by who initiated the reopening. A higher percentage of reopenings initiated by DOL were approved (73 percent, or 4,236 of 5,831 claims) than reopenings initiated by claimants (53 percent, or 758 of 1,432 claims). (See table 1.) Officials at DOL and the Office of the Ombudsman said that DOL-initiated reopenings are more likely to be approved because, in deciding to reopen claims, DOL had already determined there was sufficient evidence to warrant reopening. In addition, DOL-initiated reopenings primarily involve large groups of claims, according to DOL officials. They said that many DOL-initiated reopenings are triggered by the establishment of cohorts of claims for radiation-related cancer or by DOL bulletins or circulars about new evidence linking toxins and specific illnesses at Energy worksites. (For a list of DOL bulletins and circulars associated with reopenings, see app. II.) In these situations, DOL officials said claims examiners manually review all previously denied claims that could be affected. Of the more than 7,000 reopened claims for contracted Energy employees from 2012 through 2017, more than 6,000 had previously been denied rather than having received another outcome. When reopened, whether initiated by DOL or claimants, most (70 percent, or 4,307) were approved (see table 2). In addition, as with all claims, a higher percentage of previously denied claims were approved (75 percent) if reopened at DOL’s initiative compared to those reopened at claimants’ initiative (52 percent). Reasons Reopened Claims Were Denied Included Missing Linkage between Toxin and Illness and Insufficient Medical Evidence DOL officials provided data showing that most of the claims reopened from 2012 through 2017 that were subsequently denied compensation were denied for common reasons, including a missing linkage between toxin and illness, insufficient medical evidence, ineligible survivors, or maximum benefits already met (see table 3). Some Claimants Faced Challenges in Understanding What Evidence Was Required to Reopen Their Claim According to Office of the Ombudsman officials, some claims may have been denied as a result of claimants not understanding the evidence required for a reopening. These officials also said that claimants experience ongoing challenges at different stages of the adjudication process, including reopening, with regard to evidence required to support their claim. In the 2015 Annual Report to Congress, the Ombudsman noted claimants’ concerns about the reopening process. In particular, the Ombudsman found that DOL’s written communication with claimants requesting additional evidence or informing them of the final decision did not clearly explain what specific evidence was needed or why previously submitted evidence was deemed insufficient.
In its 2016 annual report, the Office of the Ombudsman acknowledged DOL’s efforts to ensure that decisions on claims are adequately reasoned and documented and found that some recently issued decisions showed improvement, but it also found some variation in decision quality among claims examiners. Furthermore, consistent with its 2015 report, it also found that some claimants encounter challenges during the reopening process with written communication that is not clear on the evidence needed to reopen a claim. Our prior work also found deficiencies in the quality of a sample of DOL’s written communication with claimants, and we recommended that all claimant correspondence for Recommended and Final Decisions receive supervisory review. In that report, we noted that DOL’s own monitoring also indicated that some of the letters were not always clear about the evidence needed. Moreover, a recent review by DOL’s Office of the Solicitor of 77 denied reopening requests found shortcomings in the quality of some decision letters. These shortcomings included the lack of a clear explanation for the denial, of discussion of medical evidence submitted by the claimant, and of discussion of why evidence submitted by the claimant was considered insufficient to warrant a reopening. Office of the Ombudsman officials told us that some claimants resubmit the same evidence they provided previously. This is due, in part, to claims examiners not acknowledging that they received and reviewed evidence when it was initially submitted, or to decision letters not explaining why the evidence submitted was not sufficient, according to Ombudsman officials. Consequently, claimants do not know what specific additional evidence may be needed, and their claims may not be reopened and/or approved for compensation, these officials said. Failure to establish causation between exposure and illness and insufficient medical evidence are the two most common reasons why claimant-initiated reopenings are denied. In its written response to the 2015 report by the Office of the Ombudsman, DOL stated it was undertaking a review of its website and printed material to improve communication with claimants. DOL also stated that in 2015 it began providing training to claims examiners to improve the quality of written letters to claimants, including better explanation of what additional evidence would be needed to reopen a claim. DOL stated that improved communication would address claimants’ confusion and would allow staff to better serve claimants on specific issues. As of July 2018, DOL officials said they have taken a number of steps to assist claimants and improve communication with them. For example, DOL conducts workshops for claimants’ Authorized Representatives covering such topics as the evidence needed to support a claim and how to request a reopening. DOL officials also said that, in 2016, program officials visited all district offices to provide training on topics such as writing effective letters using reader-friendly language. Officials said that they continually review printed material and are currently updating the website to provide more concise information on the claims process, including how to request reopening of a claim. In addition, DOL officials stated that they recently hired a training analyst to update claimant resources posted to the website and to develop additional training for claims examiners. Officials said that the analyst will also develop a methodology for assessing the effectiveness of the training.
Assessing the effectiveness of training represents an opportunity for DOL to address claimants’ concerns about the clarity of written correspondence they receive on claim evidence. According to Standards for Internal Control in the Federal Government, management should conduct ongoing monitoring and externally communicate the necessary quality information to achieve the entity’s objectives. These standards also require management to periodically evaluate its methods of communication so that it has the appropriate tools to communicate quality information. In addition, the EEOICPA Procedure Manual states that claims examiners must ensure that written decisions are clear, concise, and well-written with language that clearly communicates the necessary information. An assessment of DOL’s training that considers claimant concerns could help DOL better understand why some claimants remain confused about the reopening process and do not submit evidence key to supporting their claim. Until then, the agency will be unable to determine whether its training has improved communication with claimants and to target future training resources effectively. DOL Has Not Fully Implemented Advisory Board Recommendation to Enhance Database Used to Support Claims The Advisory Board in 2016 and 2018 recommended DOL incorporate additional, peer-reviewed data sources on the links between toxic substances and illnesses catalogued in the SEM, but while DOL previously agreed that doing so would be useful, it has not yet added all the sources recommended by the Advisory Board. According to Advisory Board members, incorporating these additional sources would enhance the SEM by making it more comprehensive and scientifically sound. The Advisory Board’s work on the SEM began at its first meeting in April 2016 with the creation of a subcommittee on the SEM (see fig. 6). The subcommittee reviewed the scientific soundness of the SEM, and in October 2016 the Advisory Board provided the first of two related recommendations to DOL that addressed the scientific soundness of the SEM’s data on toxic substances and diseases. At its October 2016 meeting, the Advisory Board recommended DOL incorporate 13 additional information sources created by other agencies or entities into the SEM. This recommendation was consistent with the Institute of Medicine’s recommendation to DOL in its 2013 report on the SEM. In September 2017, DOL responded to this recommendation, noting that certain additional sources identified by the Institute of Medicine might be useful. In its response, DOL asked the Advisory Board to narrow its list of 13 databases to those that would be most relevant, noting that DOL found that some of these sources were not relevant to occupational exposure, were redundant, or contradicted other sources. DOL also requested the Advisory Board’s advice on how the recommended sources could be used in the SEM. In January 2018, the Advisory Board made its second recommendation regarding the scientific soundness of the SEM’s data on toxic substances and specific diseases by identifying three priority information sources from the 13 originally recommended in October 2016 (see table 4). According to DOL, Haz-Map has included one of these three sources—the monographs on human carcinogens of the International Agency for Research on Cancer—since Haz-Map was first published in 2002, and this source has been included in the SEM since approximately 2006.
According to DOL, the International Agency for Research on Cancer is recognized as the world’s most authoritative resource for information on human carcinogens and an important source of information for populating health effect data in SEM, given its assembled expertise and the scientific veracity of its publications. Its incorporation in the SEM has prompted reopenings of affected claims. DOL officials said Advisory Board members may have been unaware of this earlier incorporation of data in the SEM. In its response to DOL, however, the Advisory Board stated that it continued to believe that incorporation of all of the information sources originally recommended by the Institute of Medicine would be useful. The Advisory Board’s recommendations on incorporating additional peer-reviewed information sources in the SEM were consistent with the earlier report of the Institute of Medicine, which found that these additional data sources generally follow a systematic methodology, reflect peer review, provide more information on linkages between toxic substances and specific diseases, and could enhance the scientific soundness of the SEM. The three information sources that the Advisory Board recommended for inclusion in the SEM in January 2018 provide information on toxic substances and their health effects, and all are peer-reviewed. The Environmental Protection Agency’s Integrated Risk Information System contains information on 511 chemicals and provides fundamental scientific information used to develop human health risk assessments. The National Toxicology Program’s Report on Carcinogens currently lists 248 substances, agents, and mixtures that are known or reasonably anticipated to cause cancer in humans. The International Agency for Research on Cancer, part of the World Health Organization, is considered the authoritative source for information on cancer, according to officials of the National Academies of Sciences, Engineering, and Medicine. In August 2018, DOL responded to the Advisory Board’s recommendation regarding these three potential additional data sources. DOL’s response noted that it uses relevant data from the International Agency for Research on Cancer in claims adjudication, including updates to these data. Regarding the other two data sources, however, DOL declined the recommendation. While noting that these two sources include voluminous and complex data, DOL also noted that the Advisory Board did not offer its own analyses of either the credibility or the scientific reliability of the materials in these databases, and DOL did not think it appropriate to add the databases’ information on health effects to the SEM in the absence of any rigorous and comprehensive investigations by the Advisory Board. DOL’s response also noted that it would consider additional input should the Advisory Board be in a position to offer more specific guidance regarding the content of data sources that would be applicable and appropriate to the administration of the program. Conclusions Contracted Energy employees who carried out the nation’s nuclear weapons production were often unaware of the extreme personal hazards they faced while serving their nation and learned of the risk only when they were later stricken by illness caused by exposure to toxins. It is imperative that their claims for compensation be given the attention and care needed to fairly administer this compensation program.
The most scientifically up-to-date information should be used to determine the health effects of various toxic substances, and claimants should be assisted in their efforts to meet statutory requirements for claims. Despite DOL efforts to improve the quality of written communication to claimants, some claimants continue to be confused about the evidence needed to successfully reopen and support their claim. DOL letters that clearly communicate what evidence is needed to support a claim could help claimants better understand the reopening process, minimize the frustration of having a claim repeatedly denied, and assure fair consideration of such claims. Recommendation for Executive Action We are making one recommendation: The Secretary of Labor, in conducting any assessment of its staff training designed to improve clarity of communication with claimants, should ensure that the assessment considers claimants’ challenges with understanding DOL’s written communications on the evidence needed to successfully reopen or otherwise support a claim. Agency Comments We provided a draft of this product to the Department of Labor (DOL) for comment. In its comments, reproduced in appendix III, DOL neither agreed nor disagreed with our recommendation to ensure that the assessment of staff training considers claimants’ challenges regarding the evidence needed to successfully reopen or otherwise support a claim. However, DOL acknowledged that it plans to focus its staff training efforts on a variety of needed training topics, including improving the quality of written communications. DOL further noted that its recently hired training analyst will be responsible for, among other things, designing assessment measures to gauge the quality of training and the effect it has on improving the overall quality of claim outcomes. We continue to encourage DOL to design its assessment so that it considers claimants’ challenges in understanding the evidence needed. DOL also provided technical comments, which we incorporated as appropriate. In addition, we provided relevant report sections to the Office of the Ombudsman, members of the Subcommittee on the Site Exposure Matrices of the Advisory Board on Toxic Substances and Worker Health, and officials of the National Academies of Sciences, Engineering, and Medicine for their technical comments and incorporated them, as appropriate. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies of this report to the appropriate congressional committees; the Secretary of Labor; and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or gurkinc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV.
Appendix I: Objectives, Scope, and Methodology We examined (1) the number of compensation claims for illnesses resulting from exposure to toxins that were reopened by the Department of Labor (DOL) and their final outcome; and (2) the extent to which an advisory board on toxic substances and worker health reviewed and advised DOL on the scientific soundness of DOL’s database on toxins and their potential links to occupational diseases, and DOL’s response. To address our objectives, we: 1. Reviewed relevant federal laws, regulations, and guidance; 2. Requested summary data from 2012 to 2017 from DOL related to the reopening process, including claims assessed for reopening, claims actually reopened, and outcomes for reopened claims and, for claims denied after being reopened, the reasons for denial; 3. Reviewed DOL program documents; 4. Reviewed recommendations of the Advisory Board on Toxic Substances and Worker Health (Advisory Board) submitted to DOL from October 2016 to January 2018, and DOL’s responses to those recommendations, as well as Advisory Board minutes and other documentation; 5. Interviewed DOL officials; members of the Advisory Board on Toxic Substances and Worker Health; officials of the National Academies of Sciences, Engineering, and Medicine; and a representative of an advocacy group. We conducted this performance audit from September 2017 to November 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Review of Federal Laws, Regulations, and Guidance We reviewed relevant federal laws, including the Energy Employees Occupational Illness Compensation Program Act of 2000 (EEOICPA), the National Defense Authorization Act for Fiscal Year 2015, the National Defense Authorization Act for Fiscal Year 2005, and the Federal Advisory Committee Act, as well as relevant federal regulations. In addition, we reviewed relevant guidance, including the Federal Energy Employees Occupational Illness Compensation Program Act Procedure Manual, as well as relevant Energy Employees Occupational Illness Compensation Program Act Bulletins and Circulars. Analysis of DOL Data on Reopened Claims and Subsequent Decisions To address our first objective, we obtained and analyzed data from DOL’s Energy Compensation System from January 1, 2012 through December 31, 2017. We selected 2012 as the first year of our review period because the program transitioned to a new data system that year, and 2017 as the last year to obtain the most recent data available at the time of our review. We obtained and analyzed data for the following types of claims: Claims reviewed for reopening. We analyzed the data DOL provided on claims that it reviewed for reopening, that is, claimant requests for reopening (claimant-initiated reopenings), and claims identified by DOL for potential reopening (DOL-initiated reopenings). In total, DOL reviewed 10,652 claims for reopening. All claims actually reopened: We obtained the aggregate number of all claims that were reopened. These claims totaled 8,234. We also obtained data for each individual claim, including reopening request date, reopening request type, reopening date, original final decision type, and outcome type.
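These per-claim data elements can be pictured as a simple record. The sketch below is illustrative only; the field names are hypothetical stand-ins for the elements listed above, not the actual schema of DOL's Energy Compensation System.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class ReopenedClaimRecord:
    # Hypothetical field names mirroring the data elements listed above.
    reopening_request_date: Optional[date]  # assumed optional for agency-initiated reopenings
    reopening_request_type: str             # claimant- or agency-initiated
    reopening_date: date
    original_final_decision_type: str       # final decision before reopening
    outcome_type: str                       # subsequent final decision following reopening
```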
The reopening request type indicates whether the claim was claimant- or agency-initiated. The original final decision type refers to the final decision when the claim was originally adjudicated. The outcome type refers to the subsequent final decision following reopening. Most recently reopened claims: As we did for all reopened claims, we obtained aggregate data on all the most recently reopened claims. These claims totaled 7,263. By using the most recently reopened claims, we were able to examine one claim for each claimant, to provide a consistent unit of analysis, given that claimants can have multiple claims at one time, and there is no limit on the number of times they can request reopening of their claims. We also obtained data on each individual claim that included the same categories as those listed above for all reopened claims. We assessed the reliability of the data obtained by (1) reviewing existing information about the data and the system that produced them, and (2) interviewing agency officials knowledgeable about the data. We determined that the data were sufficiently reliable for purposes of providing information on the number of claims for illnesses resulting from exposure to toxins that DOL reopened since 2012 and the outcome. However, there was one limitation to the data obtained: according to DOL officials, the Energy Compensation System does not allow a particular final decision to be linked to a particular reopened claim, given that claims may be reopened multiple times and may be filed for multiple conditions. As a result, DOL officials queried the system to match the final decision issued most recently after the reopening as the basis of the provided data. DOL officials explained that the data system’s codes used to record final decisions do not reflect the full complexity of a case, in part because claims may be filed for multiple conditions. To illustrate this, figure 7 depicts a hypothetical example of a claimant requesting reopening of claims for three conditions (emphysema, hearing loss, and bladder cancer) that had been denied previously. The code assigned to the final decision, although appropriate, does not reflect the full complexity of the claims’ history. In this example, given that there were three initial reopening requests for different conditions, a new reopening request for one of these conditions (hearing loss), and two subsequent final decisions, it is unclear from the coding in DOL’s system which final decision corresponds to which reopening request. We reviewed DOL summary tables on claims data to analyze the most recently reopened claims from January 1, 2012 through December 31, 2017. To assess the outcomes of these claims, we examined both the initial and subsequent final decisions. We first grouped DOL final decisions into categories (see table 4). We decided to develop an “Other” category so that claims with both approvals and denials would be grouped together. Claimants can have multiple medical conditions, and when they receive a final decision, some medical conditions may be approved while others are denied.
Claims with such mixed outcomes are coded in the Energy Compensation System as “Approved and Denied Only” or “Approved, Denied and Deferred Only.” The code “Approved, Denied and Deferred Only” refers to claims where a final decision has been rendered on claims for some illnesses—approving at least one and denying at least one—while a decision for at least one other claimed illness is deferred for further development until it is ready for a final decision. We then analyzed the initial and the subsequent final decisions. To address our first objective, we reviewed certain program documents. Specifically, we reviewed selected Accountability Reviews, which are conducted by the Division of Energy Employees Occupational Illness Compensation to monitor the quality of claims adjudication. According to program officials, these reviews serve as a quality control tool and regularly examine whether decisions on claims were supported, as well as issues such as payment accuracy. They may also occasionally include other issues, including issues related to the reopening process. In addition, we reviewed a 2017 review of denied reopening requests conducted by the DOL Office of the Solicitor. Additionally, we reviewed information related to reopened claims in the annual reports of the Office of the Ombudsman for calendar years 2012 through 2015, and DOL’s responses to the reports for calendar years 2013 through 2015. Review of Advisory Board Recommendations, DOL Responses, and Other Documents To address our second objective, we reviewed all recommendations that the Advisory Board made to DOL about the Energy Employees Occupational Illness Compensation Program Act of 2000, in order to identify those recommendations related to the scientific soundness of the Site Exposure Matrices (SEM), and DOL’s responses to these recommendations. Specifically, we reviewed the eight recommendations made by the Advisory Board in October 2016, and DOL’s response in November 2017; the three recommendations made by the Advisory Board in June 2017, and DOL’s response in March 2018; the seven overarching recommendations made by the Advisory Board in April 2017, and DOL’s response in September 2017; and the ten recommendations made by the Advisory Board in January 2018, all of which referred back to previous recommendations, in some cases revising the previous recommendation. We also reviewed DOL’s responses to these recommendations in August 2018. In addition, we reviewed the Advisory Board’s charter and minutes from selected meetings of the full Advisory Board and from the Subcommittee on the Site Exposure Matrices. In addition, in order to understand the Advisory Board’s recommendations about the Site Exposure Matrices, we reviewed a report on the scientific rigor of the SEM, Review of the Department of Labor’s Site Exposure Matrix Database (Washington, D.C.: The National Academies Press, 2013). DOL asked the Institute of Medicine to review the SEM database and its underlying source of toxic substance–occupational disease links. To review the SEM, the Institute of Medicine formed an ad hoc committee of experts in occupational medicine, toxicology, epidemiology, industrial hygiene, public health, and biostatistics, who conducted an 18-month study to review the scientific rigor of the SEM. To address both objectives, we interviewed DOL officials and others with relevant knowledge or experience of the Energy Employees Occupational Illness Compensation Program Act of 2000.
Specifically, we interviewed officials of DOL’s Division of Energy Employees Occupational Illness Compensation about topics including the reopening process, how data about reopened claims are stored in the information system, reviews of specific reopened claims, and DOL’s response to recommendations of the Advisory Board. We also interviewed officials of DOL’s Office of the Ombudsman for EEOICPA about topics such as claimants’ concerns about the reopening process and about the SEM. In addition, we interviewed officials of the National Academies of Sciences, Engineering, and Medicine, who facilitated the work of the committee that produced the report, Review of the Department of Labor’s Site Exposure Matrix. We asked the officials about topics such as the process used to recruit experts for the review, the report’s methodology, the report’s approach to scientific rigor, and the report’s recommendations. Additionally, we interviewed members of the Advisory Board on Toxic Substances and Worker Health’s Subcommittee on the Site Exposure Matrices, who represent the medical, scientific, and claimant communities. We asked the Advisory Board members about topics such as their review of the SEM and the priorities, if any, that they considered in doing so; their approach to scientific rigor and scientific soundness; and their recommendations to DOL. Finally, we interviewed a representative of the Alliance of Nuclear Workers Advocacy Groups about topics that included the challenges, if any, that claimants experience regarding reopened claims and use of the SEM, and the Advisory Board’s recommendations to DOL. Appendix II: List of Department of Labor Bulletins and Circulars About Reopenings of Energy Employees Part E Claims Energy Employees Occupational Illness Compensation Program Act Bulletins Associated with Part E Reopenings 1. Department of Labor, EEOICPA Bulletin 12-01, Chronic Lymphocytic Leukemia (CLL) as Radiogenic Cancer under the Energy Employees Occupational Illness Compensation Program Act (EEOICPA), March 7, 2012. 2. Department of Labor, EEOICPA Bulletin 13-02, Systematic Review of Denied Part E Cases, February 21, 2013. 3. Department of Labor, EEOICPA Bulletin 16-01, Criteria for Establishing Causation for Asthma Claims Under Part E of the Energy Employees Occupational Illness Compensation Program Act (EEOICPA), October 26, 2015. 4. Department of Labor, EEOICPA Bulletin 16-02, Presumptions Available for Accepting Chronic Obstructive Pulmonary Disease (COPD) Under Part E of the Energy Employees Occupational Illness Compensation Program Act, December 28, 2015. 5. Department of Labor, EEOICPA Bulletin 16-03, Instructions for Use of the Direct Disease Linked Work Processes (DDLWP) in the Site Exposure Matrices (SEM) under Part E of the Energy Employees Occupational Illness Compensation Program Act (EEOICPA), July 11, 2016. Energy Employees Occupational Illness Compensation Program Act Circulars Associated with Part E Reopenings 1. Department of Labor, EEOICPA Circular 13-06, Review of Denied Bladder Cancer Cases under Part E. (Superseded by Procedure Manual Chapter 15), February 21, 2013. 2. Department of Labor, EEOICPA Circular 13-12, Review of Denied Ovarian Cancer Cases under Part E. (Superseded by Procedure Manual Chapter 15), August 29, 2013. 3. Department of Labor, EEOICPA Circular 15-04, Review of Cases Involving Exposure to TCE and the Development of Kidney Cancer. (Superseded by Procedure Manual Chapter 15), November 1, 2014. 4.
Department of Labor, EEOICPA Circular 15-05, Occupational Exposure Guidance Relating to Asbestos. (Superseded by Procedure Manual Chapter 15), December 17, 2014. 5. Department of Labor, EEOICPA Circular 17-04, Rescind Post 1995 Toxic Exposure Guidance, February 2, 2017. 6. Department of Labor, EEOICPA Circular 18-01, Idiopathic Disease Diagnosis, December 6, 2017. Appendix III: Comments from the Department of Labor Appendix IV: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Meeta Engle (Assistant Director), Chris Morehouse (Analyst-In-Charge), and LaToya King made key contributions to this report. Also contributing to this report were Susan Aschoff, James Bennett, Joseph Cook, Sheila R. McCoy, Jean McSween, Alex Galuten, David Perkins, Tim Persons, Benjamin Sinoff, Almeta Spencer, and Jerome Sandau. Related GAO Products Energy Employees Compensation: DOL Generally Followed Its Procedures to Process Claims but Could Strengthen Some Internal Controls. GAO-16-74. Washington, D.C.: March 10, 2016. Energy Employees Compensation: Additional Independent Oversight and Transparency Would Improve Program’s Credibility. GAO-10-302. Washington, D.C.: March 22, 2010. Energy Employees Compensation: Actions to Promote Contract Oversight, Transparency of Labor’s Involvement, and Independence of Advisory Board Could Strengthen Program. GAO-08-4. Washington, D.C.: October 26, 2007. Energy Employees Compensation: Adjustments Made to Contracted Review Process, But Additional Oversight and Planning Would Aid the Advisory Board in Meeting Its Statutory Responsibilities. GAO-06-177. Washington, D.C.: February 10, 2006.
Why GAO Did This Study For decades, Energy, its predecessor agencies, and contractors employed thousands of employees in hazardous work in nuclear weapons production, exposing many employees to toxic substances. The Energy Employees Occupational Illness Compensation Program, administered by DOL, provides compensation for illnesses linked to exposures. Since 2004, DOL has provided about $4.4 billion to eligible employees and their survivors. GAO was asked to review aspects of the claims process for contracted employees. GAO examined (1) the number and outcome of compensation claims for illnesses resulting from exposure to toxins that DOL has reopened since 2012, and (2) the Advisory Board's advice to DOL on the scientific soundness of its database on toxins and illnesses, and DOL's responses. GAO analyzed DOL claims data for 2012—when a new data system was introduced—through 2017 and assessed their reliability. GAO reviewed relevant federal laws, DOL procedures, and Advisory Board documents and interviewed DOL officials, Advisory Board members, experts, and a claimant advocate. What GAO Found The Department of Labor (DOL), from 2012 through 2017, reopened more than 7,000 compensation claims by contracted workers with illnesses resulting from exposure to toxins at Department of Energy (Energy) worksites. Of these reopened claims, 69 percent were approved for compensation (see figure). Claims can be reopened for various reasons, including new information on toxic substances and associated illnesses or new evidence provided by a claimant. According to DOL's Office of the Ombudsman officials, some claims may have been denied as a result of claimants not understanding the evidence required to support their claim. Moreover, the Ombudsman's two most recent reports, in 2015 and 2016, found DOL's letters to claimants requesting additional evidence or informing them of the final decision did not clearly explain the specific evidence needed or why previously submitted evidence was deemed insufficient. GAO's previous work also found deficiencies in the quality of a sample of DOL's written communication with claimants. DOL has provided training to claims examiners on how to write clearly in correspondence and plans to assess the training. The assessment is an opportunity for DOL to better understand why some claimants remain confused about needed evidence and could help DOL target its training resources more effectively. The Advisory Board on Toxic Substances and Worker Health (Advisory Board) recommended in 2016 and 2018 that DOL incorporate additional sources of information on toxic substances and associated illnesses into the database it uses to help determine eligibility for claims compensation. While DOL noted that certain additional data sources might be useful, it has not added all of the recommended data sources. The Advisory Board was created to provide technical advice to DOL on its database, among other things. What GAO Recommends GAO recommends that DOL ensure any assessment of its staff training efforts considers claimants' challenges with understanding DOL's communications on evidence for claims. DOL neither agreed nor disagreed with the recommendation, except to note that it plans to focus its training on such topics as quality of written communications and to assess its training efforts.
Background
U.S. Missions in Afghanistan
Since 2001, the United States has made a commitment to building Afghanistan's security and governance in order to prevent the country from once again becoming a sanctuary for terrorists. To achieve its security objectives, the United States currently has two missions in Afghanistan: a counterterrorism mission that it leads and the NATO-led Resolute Support train, advise, and assist mission, which it participates in with other coalition nations. The objective of Resolute Support, according to Department of Defense (DOD) reporting, is to establish self-sustaining Afghan security ministries and forces that work together to maintain security in Afghanistan. The United States is conducting these missions within a challenging security environment that has deteriorated since the January 2015 transition to Afghan-led security. The United Nations reported nearly 24,000 security incidents in Afghanistan in 2017—the most ever recorded—and, despite a slight decrease in the overall number of security incidents in early 2018, the United Nations noted significant security challenges, including a spike in high-casualty attacks in urban areas and coordinated attacks by the insurgency on Afghan National Defense and Security Forces (ANDSF) checkpoints.
DOD provides both personnel and funding to support its efforts in Afghanistan. DOD documents indicate that the United States contributes more troops to Resolute Support than any other coalition nation. As of May 2018, the United States was contributing 54 percent of Resolute Support military personnel, according to DOD reporting. Of the approximately 14,000 U.S. military personnel in Afghanistan as of June 2018, about 8,500 were assigned to Resolute Support to train, advise, and assist the ANDSF, according to DOD reporting. For fiscal year 2018, Congress appropriated about $4.67 billion for the Afghanistan Security Forces Fund—the primary mechanism of U.S. financial support for manning, training, and equipping the ANDSF. Other international donors provided about $800 million, and the Afghan government committed to providing about $500 million, according to DOD reporting.
Under Resolute Support and the International Security Assistance Force mission that preceded it, the Combined Security Transition Command–Afghanistan (CSTC-A) is the DOD organization responsible for (1) overseeing efforts to equip and train the Afghan National Army (ANA) and Afghan National Police (ANP); (2) validating requirements, including equipment requirements; (3) validating existing supply levels; (4) submitting requests to DOD components to contract for procurement of materiel for the ANDSF; and (5) ensuring that the Afghan government appropriately uses and accounts for U.S. funds provided as direct contributions from the Afghanistan Security Forces Fund. The Office of the Undersecretary of Defense for Policy (OSD-P) is responsible for developing policy on and conducting oversight of the bilateral security relationship with Afghanistan, focused on efforts to develop the Afghan security ministries and their forces.
U.S.-Purchased Equipment for the ANDSF
In August 2017, we reported that the United States had spent almost $18 billion on equipment and transportation for the ANDSF from fiscal year 2005 through April 2017, representing the second-largest expenditure category from the Afghanistan Security Forces Fund.
In that report, we identified six types of key equipment the United States funded for the ANDSF in fiscal years 2003 through 2016, including approximately:
600,000 weapons, such as rifles, machine guns, grenade launchers, shotguns, and pistols;
163,000 tactical and nontactical radios, such as handheld radios;
76,000 vehicles, such as Humvees, trucks, recovery vehicles, and mine resistant ambush protected vehicles;
30,000 equipment items for detecting and disposing of explosives, such as bomb disposal robots and mine detectors;
16,000 equipment items for intelligence, surveillance, and reconnaissance, such as unmanned surveillance drones and night vision devices; and
208 aircraft, such as helicopters, light attack aircraft, and cargo airplanes.
ANDSF Organization and Force Levels
The Ministry of Defense oversees the ANA, and the Ministry of Interior oversees the ANP. According to DOD reporting, the authorized force level for the ANDSF, excluding civilians, as of June 2018 was 352,000: 227,374 for the Ministry of Defense and 124,626 for the Ministry of Interior. The ANA includes the ANA corps, Afghan Air Force, Special Mission Wing, ANA Special Operations Command, and Ktah Khas (counterterrorism forces). The ANP includes the Afghan Uniformed Police, Afghan Anti-Crime Police, Afghan Border Police, Public Security Police, Counter Narcotics Police of Afghanistan, and General Command of Police Special Units. The ANA Special Mission Wing, Ktah Khas, ANA Special Operations Command, and ANP General Command of Police Special Units are collectively referred to as the Afghan Special Security Forces. In this report, we refer to the Afghan Air Force and the Afghan Special Security Forces as specialized forces, and the other components of the ANDSF as conventional forces. According to DOD reporting, the combined authorized force level for the specialized forces as of June 2018 was approximately 34,500, or about 10 percent of the ANDSF's total authorized force level of 352,000, compared with the conventional forces, which make up about 74 percent of the total authorized force level for the ANDSF. Figure 1 shows the ANDSF's organization.
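As a quick arithmetic check on the force-level figures above, the following short Python sketch is illustrative only (the script and variable names are ours, not DOD's); it uses only the numbers reported in the text and confirms that the ministry totals sum to 352,000 and that the specialized forces' share rounds to the "about 10 percent" DOD reports.

```python
# DOD-reported authorized force levels as of June 2018 (figures from the text above).
MINISTRY_OF_DEFENSE = 227_374   # ANA and its components
MINISTRY_OF_INTERIOR = 124_626  # ANP and its components
SPECIALIZED_FORCES = 34_500     # Afghan Air Force plus Afghan Special Security Forces

total = MINISTRY_OF_DEFENSE + MINISTRY_OF_INTERIOR
assert total == 352_000  # matches the ANDSF total authorized force level DOD reports

share = SPECIALIZED_FORCES / total
print(f"Specialized forces: {share:.1%} of total authorized force level")
# Prints "Specialized forces: 9.8% ...", which DOD rounds to "about 10 percent".
```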
Resolute Support Advising Strategy and Goals
U.S. and coalition advisors from Resolute Support focus on capacity building at the Ministry of Defense, Ministry of Interior, and ANDSF regional headquarters, according to DOD reporting. Ministerial advisors are located at Resolute Support headquarters in Kabul. At the ministerial level, advisors provide assistance to improve institutional capabilities, focusing on several functional areas. Table 1 summarizes the indicators of effectiveness that ministerial advisors are to use to measure ministerial progress in developing functioning systems that can effectively execute each of the functional areas.
Regional Resolute Support advisors from seven advising centers located throughout Afghanistan provide support to nearby ANA corps and ANP zone headquarters personnel, according to DOD reporting. Some advisors are embedded with their ANDSF counterparts, providing a continuous coalition presence, while others provide less frequent support, based on proximity to and capability of their ANDSF counterparts. Regional advisors are to track ANDSF capability development by assessing the progress of the ANA corps and ANP zone headquarters based on five capability pillars (see table 2). DOD and other Resolute Support advisors are to document the results of these assessments each quarter in an ANDSF Assessment Report.
According to DOD reporting, in addition to ministerial and regional advising, two tactical-level advisory commands provide continuous support for the ANDSF's specialized forces: Train, Advise, and Assist Command–Air (TAAC-Air) advises the Afghan Air Force down to the unit level, and NATO Special Operations Component Command–Afghanistan (NSOCC-A) primarily provides tactical-level special operations advising for the Afghan Special Security Forces. TAAC-Air and NSOCC-A assess capabilities at the headquarters level based on the five capability pillars described above in table 2, and these assessments are included in the quarterly ANDSF Assessment Report. Figure 2 shows the levels of advising each Resolute Support advisory command type provides for the ANDSF conventional forces and specialized forces.
ANDSF Capabilities Reportedly Continue to Improve; DOD Has Identified Several Capability Gaps and Initiated Efforts to Address Them
DOD Has Reported the ANDSF Generally Continue to Improve Their Capabilities but Rely on Coalition Forces to Fill Several Critical Capability Gaps
Since Resolute Support began, the ANDSF have improved some capabilities related to the functional areas and capability pillars described above, but face several capability gaps that leave them reliant on coalition assistance, according to publicly available DOD reporting. DOD defines capability as the ability to execute a given task. A capability gap is the inability to execute a specified course of action, such as an ANDSF functional area or a capability pillar (see tables 1 and 2 above). According to DOD guidance, a gap may occur because forces lack a materiel or non-materiel capability, lack proficiency or sufficiency in a capability, or need to replace an existing capability solution to prevent a future gap from occurring.
According to DOD reporting on the Afghan security ministries, ANA corps, and ANP zones, the ANDSF generally have improved in some capability areas since Resolute Support began, with some components performing better than others. For example, DOD has reported that the Afghan ministries have improved in operational planning, strategic communications, and coordination between the Ministry of Interior and Ministry of Defense at the national level. In general, the ANA is more capable than the ANP, according to DOD reporting. According to DOD officials and reporting by the Special Inspector General for Afghanistan Reconstruction (SIGAR), this is due, in part, to the ANA having more coalition advisors and monitoring than the ANP. DOD officials also noted that the Ministry of Interior, which oversees the ANP, and Afghanistan's justice system are both underdeveloped, hindering the effectiveness of the ANP. Corruption, understaffing, and training shortfalls have also contributed to the ANP's underdevelopment, according to DOD and SIGAR reporting.
The Afghan Special Security Forces are the most capable within the ANDSF and can conduct the majority of their operations independently without coalition enablers, according to DOD reporting. DOD and SIGAR reports have attributed the Afghan Special Security Forces' relative proficiency to factors such as low attrition rates, longer training, and close partnership with coalition forces. The Afghan Air Force is becoming increasingly capable, and can independently plan for and perform some operational tasks, such as armed overwatch and aerial escort missions, according to DOD reporting. However, DOD has reported that the ANDSF generally continue to need support in several key areas.
For example, as of December 2017, DOD reported several ministerial capability gaps, including force management; logistics; and analyzing and integrating intelligence, surveillance, and reconnaissance information. DOD also reported that, as of December 2017, the ANA and ANP continued to have capability gaps in several key areas, such as weapons and equipment sustainment and integrating fire from aerial and ground forces. The ANDSF rely on support from contractors and coalition forces to mitigate capability gaps in these key areas. For some capability areas, such as aircraft and vehicle maintenance and logistics, the ANDSF are not expected to be self-sufficient until at least 2023, according to DOD reporting.
According to DOD officials and SIGAR reporting, coalition and contractor support helps mitigate ANDSF capability gaps in the immediate term but may make it challenging to assess the ANDSF's capabilities and gaps independent of such support. For example, vehicle and aircraft maintenance contractors are responsible for sustaining specific operational readiness rates for the equipment they service. While this helps ensure that ANDSF personnel have working equipment to accomplish their mission, thereby closing an immediate capability gap, it may mask the ANDSF's underlying capabilities and potentially prolong reliance on such support, according to DOD officials and SIGAR reporting.
DOD and the ANDSF Have Plans and Initiatives in Place to Address Some ANDSF Capability Gaps
DOD and the ANDSF have begun implementing plans and initiatives that aim to strengthen ANDSF capabilities. These include the following, among others:
ANDSF Roadmap. In 2017, the Afghan government began implementing the ANDSF Roadmap—a series of developmental initiatives that seek to strengthen the ANDSF and increase security and governance in Afghanistan, according to DOD reporting. The Roadmap is structured to span 4 years, but DOD has reported that its full implementation will likely take longer. According to DOD reporting, the Roadmap aims to improve four key elements: (1) fighting capabilities; (2) leadership development; (3) unity of command and effort; and (4) counter-corruption efforts. Under the Roadmap's initiative to increase the ANDSF's fighting capabilities, DOD and the ANDSF have begun implementing plans to increase the size of the specialized forces. Specifically, DOD reports that the ANDSF plans to nearly double the size of the Afghan Special Security Forces by 2020 as an effort to bolster the ANDSF's offensive reach and effectiveness. The Afghan Special Security Forces are to become the ANDSF's primary offensive force, the conventional ANA forces are to focus on consolidating gains and holding key terrain and infrastructure, and the conventional ANP forces are to focus on community policing efforts. In addition, to provide additional aerial fire and airlift capabilities, the ANDSF began implementing an aviation modernization plan in 2017. The aim is to increase personnel strength and the size of the Afghan Air Force and Special Mission Wing fleets by 2023.
Enhanced vehicle maintenance efforts. To help improve the ANDSF's vehicle maintenance abilities, DOD awarded a National Maintenance Strategy Ground Vehicle Support contract, which, according to DOD officials, became fully operational in December 2017.
The National Maintenance Strategy Ground Vehicle Support contract consolidated five separate vehicle maintenance and training contracts into a single contract and contains provisions for building the capacity of ANDSF and Afghan contractors to incrementally take control of vehicle maintenance over a 5-year period.
Additional U.S. military personnel. As part of the South Asia strategy, the United States committed 3,500 additional military personnel to increase support to its missions in Afghanistan. According to DOD reporting, most of the additional personnel will support the Resolute Support mission, providing more advising and combat enabler support to the ANDSF. Additionally, in March 2018, the United States began deploying a Security Force Assistance Brigade—a new type of unit made up of U.S. Army personnel with expertise in training foreign militaries—to Afghanistan. The Security Force Assistance Brigade will advise conventional and specialized forces at and below the corps and zone levels and will accompany and support ANA conventional forces at the battalion level in ground operations as needed, according to DOD and SIGAR reporting.
DOD Has Some Information on ANDSF Specialized Forces' Ability to Operate and Maintain U.S.-Purchased Equipment but Has Limited Reliable Information on Its Conventional Forces
DOD Advisors Embedded with Specialized Forces Provide Some Information on Those Forces' Capabilities
DOD collects some reliable information about the operation and maintenance abilities of ANDSF specialized forces, in part because advisors are embedded at the tactical level with the specialized forces, according to DOD officials. Specifically, U.S. and coalition forces advise specialized forces at the tactical level under Resolute Support because building ANDSF aviation and special operations abilities is considered particularly important, according to DOD reporting. DOD officials told us that since U.S. and coalition forces are embedded at the tactical level for specialized forces, they can monitor, assess, and report on tactical abilities, including the ability to operate and maintain equipment.
Our analysis of information provided by DOD about the Afghan Air Force's ability to operate and maintain MD-530 helicopters illustrates that DOD has some detailed information about specialized forces. TAAC-Air advisors help train Afghan pilots and maintainers and collect information on their tactical abilities. For example, TAAC-Air advisors track the percentage of maintenance performed by Afghan Air Force maintainers and aircraft operational readiness rates, according to DOD officials. According to DOD reporting and officials, as of December 2017, the Afghan Air Force could independently conduct MD-530 helicopter operations for short intervals without contractor support but relied on contractors to perform the majority of maintenance and sustainment activities. See appendix II for more information on the Afghan Air Force's ability to operate and maintain MD-530 helicopters.
DOD Advisors Have Limited Contact with Conventional Forces in the Field, Yielding Little Information on Their Ability to Operate and Maintain Equipment
U.S. and coalition forces perform high-level assessments of the ANDSF conventional forces' capabilities at the corps and zone levels but do not assess their tactical abilities, such as the ability to operate and maintain equipment, according to DOD officials. For example, U.S.
and coalition forces assess the ANA and ANP conventional forces in quarterly ANDSF Assessment Reports, but these reports are at the corps and zone headquarters levels and are not meant to provide an evaluation of the entire ANDSF, according to DOD reporting. DOD officials stated that other U.S.- and coalition-produced reports and assessments, such as DOD's semiannual Section 1225 reports to Congress, semiannual periodic mission reviews, and annual Afghanistan Plans of Record, provide some information on the ANDSF's high-level capabilities. However, according to DOD officials, these reports do not routinely assess the conventional forces' ability to operate and maintain equipment.
According to DOD officials, DOD does not assess conventional forces' tactical abilities because advisors have had little or no direct contact with conventional units below the corps and zone levels, and thus do not collect such information on conventional forces. Specifically, under Resolute Support, U.S. and coalition forces have not embedded with the conventional forces below the corps and zone levels except in limited circumstances. Since U.S. and coalition forces do not collect firsthand information on the conventional units' tactical abilities, they rely on those units' self-reporting for information on ANDSF abilities below the corps and zone levels, which, according to DOD officials, may be unreliable. ANDSF reporting is not verified by U.S. officials and can be unreliable in its consistency, comprehensiveness, and credibility, according to DOD officials and SIGAR. For example, the ANDSF produce a monthly tracker on vehicle availability, maintenance backlog, repair times, and personnel productivity, but DOD officials told us that the trackers are of questionable accuracy.
Our analysis of information provided by DOD about the ANDSF's ability to operate and maintain tactical and nontactical radios illustrates the limited amount of information DOD has on ANDSF conventional forces' tactical abilities. Specifically, DOD officials could not say how well ANDSF personnel on the front lines operate radios in the field and had only limited information on the ANDSF's ability to maintain radios. For example, the officials noted that the ANA conventional forces can perform some unit-level radio repairs but that complex ANA radio maintenance and all ANP radio maintenance is conducted by contractors. DOD officials at Resolute Support headquarters told us that they provide ministerial-level advising on how to manage ANDSF radio systems and do not provide tactical advising or inventory control for radios. See appendix III for more information on the ANDSF's ability to operate and maintain radios.
Our analysis of information provided by DOD about the ANDSF's ability to operate and maintain Mobile Strike Force Vehicles (MSFV) highlights the limited amount of information DOD has on ANDSF conventional forces' tactical abilities compared with specialized forces. DOD officials were able to provide operation and maintenance information for MSFVs that had transferred to the specialized forces as of January 2018 but were unable to provide operation and maintenance information for any other MSFVs. The ANDSF began transferring one of the ANDSF's two MSFV brigades from the conventional to specialized forces in August 2017, according to DOD officials.
As part of this transfer, NSOCC-A advisors—who provide tactical-level advising for the Afghan Special Security Forces—assumed oversight for the first brigade from Resolute Support headquarters advisors. DOD officials stated that the ANDSF's ability to operate and maintain MSFVs in this brigade prior to the transfer was unknown, as neither Resolute Support headquarters nor the ANA had assessed this. The operation and maintenance abilities of the second brigade, which is still in the conventional forces, remain unknown. DOD officials at NSOCC-A were able to provide information such as inventory and mission capability rates for the MSFVs that had transferred, but only for the short period of time the vehicles had been under the control of the specialized forces. DOD officials told us that NSOCC-A plans to collect more information on the specialized forces' ability to operate and maintain MSFVs as they are transferred. See appendix IV for more information on the ANDSF's ability to operate and maintain MSFVs.
In the absence of embedded advisors at the tactical level, DOD has not implemented alternative approaches to collect reliable information about the conventional forces' ability to operate and maintain equipment. Federal internal control standards state that U.S. agencies should obtain and process reliable information to evaluate performance in achieving key objectives and assessing risks. DOD officials acknowledged that some of the plans described above that DOD and the ANDSF have begun implementing to address capability gaps may provide opportunities for DOD to collect more reliable information on the conventional forces' ability to operate and maintain U.S.-purchased equipment. For example, the National Maintenance Strategy Ground Vehicle Support contract requires that contractors regularly report the total work orders received, work in progress, and completed maintenance work performed by ANDSF personnel, as well as vehicle availability rates, which may be more reliable than the ANDSF's monthly report on vehicle availability. In addition, the Security Force Assistance Brigade may be able to collect and report on the tactical abilities of units they advise and accompany on missions since they are being deployed at or below the corps and zone levels. However, as of June 2018, DOD officials had not decided which, if any, of these options to pursue. Without reliable information on the equipment operation and maintenance abilities of ANDSF conventional forces, which represent nearly 75 percent of the ANDSF, DOD may be unable to fully evaluate the success of its train, advise, assist, and equip efforts in Afghanistan.
Conclusions
The United States invested nearly $84 billion in Afghan security in the 17-year period spanning fiscal years 2002 through 2018, but DOD continues to face challenges to developing a self-sustaining ANDSF. While DOD has reported the ANDSF have improved in several capability areas, they continue to face critical capability gaps, impeding their ability to maintain security and stability in Afghanistan independent of U.S. and coalition forces. Moreover, DOD lacks reliable information about the degree to which conventional forces—which make up about three-quarters of the ANDSF—are able to operate and maintain U.S.-purchased equipment. This limits DOD's ability to fully evaluate the success of its train, advise, assist, and equip efforts in Afghanistan.
Recommendation for Executive Action
The Secretary of Defense should develop and, as appropriate, implement options for collecting reliable information on the ANDSF conventional forces' ability to operate and maintain U.S.-purchased equipment. (Recommendation 1)
Agency Comments
We provided a draft of this report to DOD and State for comment. DOD declined to provide written comments specifically on this public version of the report, but DOD's comments on the sensitive version of this report are reprinted in appendix V. The sensitive version of this report included two recommendations, which DOD cited in its comments on the draft of the sensitive report. One of those recommendations related to information that DOD deemed to be sensitive and that must be protected from public disclosure. Therefore, we have omitted that recommendation from DOD's comment letter in appendix V. This omission did not have a material effect on the substance of DOD's comments. In its comments, DOD concurred with the recommendation we made in this version of the report and stated it will take steps to implement it. DOD also provided technical comments, which we incorporated as appropriate. The Department of State had no comments.
We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, and the Secretary of State. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or farbj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI.
Appendix I: Objectives, Scope, and Methodology
House Report 114-537 associated with the National Defense Authorization Act for Fiscal Year 2017 included a provision for us to review the Afghan National Defense and Security Forces' (ANDSF) capability and capacity to operate and sustain U.S.-purchased weapon systems and equipment. This report is a public version of a sensitive report that we issued on September 20, 2018. Our September report included three objectives, including one on the extent to which DOD considers ANDSF input and meets their needs when identifying equipment requirements. DOD deemed the information related to that objective to be sensitive information that must be protected from public disclosure. Consequently, we removed that objective and a related recommendation from this public report. This version includes information on the other two objectives: (1) what has been reported about ANDSF capabilities and capability gaps and (2) the extent to which DOD has information about the ANDSF's ability to operate and maintain U.S.-purchased equipment. Although the information provided in this report is more limited, the report uses the same methodology for the two objectives as the sensitive report.
To identify what has been reported about ANDSF capabilities and capability gaps, we reviewed North Atlantic Treaty Organization (NATO) and DOD documents and reports, such as DOD's semiannual Section 1225 reports to Congress, produced after the start of the NATO-led Resolute Support mission on January 1, 2015.
To determine what steps DOD and NATO have taken to try to address gaps, we reviewed reports the Center for Naval Analyses produced for DOD, as well as DOD and NATO documents and reports produced after January 1, 2015, and reports from GAO, SIGAR, and the DOD Inspector General. We also interviewed Center for Naval Analyses representatives and DOD officials in the United States and Afghanistan, including DOD officials at CSTC-A and in OSD-P who helped create the DOD reporting we reviewed.
To determine the extent to which DOD has information about the ANDSF's ability to operate and maintain U.S.-purchased equipment, we reviewed DOD documents and reports and interviewed DOD officials in the United States and Afghanistan, including DOD officials who advise the ANDSF. We also reviewed federal internal control standards to determine what responsibilities agencies have specifically related to information collection. To provide illustrative examples of information DOD has about the ANDSF's ability to operate and maintain U.S.-purchased equipment and what that information indicates about the ANDSF's abilities and challenges, we interviewed and analyzed written responses from DOD officials, including DOD officials who provide procurement and lifecycle management for some ANDSF aircraft and vehicles, about three equipment types—MD-530 helicopters, Mobile Strike Force Vehicles (MSFV), and radios. We selected these three equipment types from a list that we developed, for an August 2017 report, of key ANDSF equipment the United States purchased from fiscal years 2003 through 2016. We made our selections after reviewing DOD documentation and interviewing DOD officials regarding a number of considerations, such as (1) how critical the equipment is to the ANDSF's ability to achieve its mission; (2) which ANDSF component uses the equipment (i.e., Afghan National Police, Afghan National Army, or both); (3) whether DOD intends to continue procuring the equipment for the ANDSF; and (4) whether the equipment had been in use at least 5 years.
We collected detailed information about the ANDSF's ability to operate and maintain MD-530 helicopters, MSFVs, and radios, as well as other key statistics DOD provided about the equipment, such as inventory, average lifespan, average cost, role, and training. This information was based mainly on DOD responses collected from January 2018 to February 2018 as well as DOD documents and reports produced after January 1, 2015. The total number of MD-530s and radios authorized for procurement was based on DOD data that we collected for our August 2017 report on key ANDSF equipment the United States purchased in fiscal years 2003 through 2016, which we supplemented with additional data DOD provided on U.S.-purchased equipment from October 1, 2016, through December 31, 2017.
The performance audit upon which this report is based was conducted from August 2016 to September 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
We subsequently worked with DOD from September 2018 to October 2018 to prepare this public version of the original sensitive report for public release. This public version was also prepared in accordance with those standards.
Appendix II: The Afghan Air Force's Ability to Operate and Maintain MD-530 Helicopters
Manufacturer: MD Helicopters, Inc.
U.S. Program Management Office: U.S. Army, Non-Standard Rotary Wing Aircraft Project Management Office
Program Advising: Train, Advise, and Assist Command–Air (TAAC-Air)
The United States originally procured 6 unarmed MD-530s for the Afghan Air Force (AAF) for rotary wing training in 2011. In 2014, the United States purchased 12 armed MD-530s and began retrofitting the 5 remaining trainer helicopters with armament for operational missions to address a close air attack gap. MD-530s were chosen to fill the gap over other aircraft, in part because they could be delivered relatively quickly as the AAF awaited A-29 light attack aircraft that were experiencing procurement delays, according to Department of Defense (DOD) officials. The United States procured additional MD-530s in 2015, 2016, and 2017 because of the aircraft's positive impact on the battlefield, according to DOD officials (see fig. 3).
Key Statistics
Variants: All can be armed with .50-cal machine gun pods and/or 2.75-inch rocket pods.
Total Authorized for Procurement: 60 as of December 31, 2017.
Inventory: 25 as of January 2018 (30 are scheduled for delivery; attrition of 5 due to crashes and enemy fire).
Average Lifespan: Absent mishaps, and with good maintenance, there is no defined lifespan limit for MD-530s, according to DOD officials.
Role: MD-530s support the Afghan National Army and Afghan National Police, depending on the mission, in all but one region of Afghanistan, which is supported by other aircraft. MD-530s are typically tasked two at a time for missions, according to DOD officials.
Crew: An MD-530 crew consists of a pilot and co-pilot, according to DOD. Division of labor is based on the individual crew members' capabilities, with one pilot handling navigation and communication while the other identifies targets and operates the weapon systems.
Average Cost: $6.3 million per aircraft, including all electronic devices, weapons management systems, and weapons (excluding ordnance), according to DOD officials.
Training: AAF pilots train with U.S. Army pilot advisors at Kandahar Air Field, according to DOD officials. MD-530 pilot training takes about 3 years (see fig. 4).
GAO Comments
1. The GAO report number cited in DOD's letter refers to a draft of the sensitive version of this report, which we issued on September 20, 2018. Prior to issuing that version, we changed its report number to GAO-18-662SU to reflect its sensitive nature. That version of this report included two recommendations. The second recommendation has been omitted from DOD's letter in this public version because it was related to information that DOD deemed to be sensitive.
Appendix VI: GAO Contacts and Staff Acknowledgments
GAO Contacts
Staff Acknowledgments
In addition to the contact named above, Joyee Dasgupta (Assistant Director), Kara Marshall, Katherine Forsyth, and Bridgette Savino made key contributions to this report. The team also benefited from the expert advice and assistance of David Dayton, Neil Doherty, Justin Fisher, Ashley Alley, Cary Russell, Marie Mak, James Reynolds, Sally Williamson, Ji Byun, and J. Kristopher Keener.
Why GAO Did This Study
Developing independently capable ANDSF is a key component of U.S. and coalition efforts to create sustainable security and stability in Afghanistan under the North Atlantic Treaty Organization (NATO)-led Resolute Support mission. The United States is the largest contributor of funding and personnel to Resolute Support, providing and maintaining ANDSF equipment, along with training, advising, and assistance to help the ANDSF effectively use and sustain the equipment in the future.
House Report 114-537 included a provision for GAO to review the ANDSF's capability and capacity to operate and sustain U.S.-purchased weapon systems and equipment. This report addresses (1) what has been reported about ANDSF capabilities and capability gaps and (2) the extent to which DOD has information about the ANDSF's ability to operate and maintain U.S.-purchased equipment. To conduct this work, GAO analyzed DOD and NATO reports and documents, examined three critical equipment types, and interviewed DOD officials in the United States and Afghanistan. This is a public version of a sensitive report issued in September 2018. Information that DOD deemed sensitive has been omitted.
What GAO Found
Since the Resolute Support mission began in 2015, the Afghan National Defense and Security Forces (ANDSF) have improved some fundamental capabilities, such as high-level operational planning, but continue to rely on U.S. and coalition support to fill several key capability gaps, according to Department of Defense (DOD) reporting. DOD has initiatives to address some ANDSF capability gaps, such as a country-wide vehicle maintenance and training effort, but DOD reports it does not expect the ANDSF to develop and sustain independent capabilities in some areas, such as logistics, for several years.
While DOD has firsthand information on the abilities of the Afghan Air Force and Special Security Forces to operate and maintain U.S.-purchased equipment, it has little reliable information on the equipment proficiency of conventional ANDSF units. U.S. and coalition advisors are embedded at the tactical level for the Air Force and Special Security Forces, enabling DOD to directly assess those forces' abilities. However, the advisors have little direct contact with conventional ANDSF units on the front lines. As a result, DOD relies on those units' self-assessments of tactical abilities, which, according to DOD officials, can be unreliable. GAO's analysis of three critical equipment types illustrated the varying degrees of DOD's information (see figure above). For example, DOD provided detailed information about the Air Force's ability to operate and maintain MD-530 helicopters and the Special Security Forces' ability to operate and maintain Mobile Strike Force Vehicles; however, DOD had limited information about how conventional forces operate and maintain radios and Mobile Strike Force Vehicles. DOD's lack of reliable information on conventional forces' equipment operations and maintenance abilities adds to the uncertainty and risk in assessing the progress of DOD efforts in Afghanistan.
What GAO Recommends
GAO recommends that DOD develop options for collecting reliable information on conventional ANDSF units' ability to operate and maintain U.S.-purchased equipment. DOD concurred with this recommendation.
Background
CBP and Border Patrol Operations along the Southwest Border
Securing U.S. borders is the responsibility of the Department of Homeland Security (DHS), in collaboration with other federal, state, local, and tribal entities. U.S. Customs and Border Protection (CBP), a component within DHS, is the lead agency for U.S. border security, and one of its top priorities is preventing, detecting, and apprehending illegal border crossers and interdicting other illicit cross-border activity. The U.S. Border Patrol is the CBP component charged with ensuring security along border areas between ports of entry. To secure the nearly 2,000-mile southwest border, Border Patrol divides responsibility for border security operations geographically among nine sectors, as shown in figure 1. Within each sector, Border Patrol agents at stations are responsible for patrolling and responding to emerging threats within defined geographic areas, using CBP-owned roads and a network of roads owned by other federal, state, local, tribal, and private landowners. Agents are to identify and report any needed maintenance and repair requirements of the roads they use to patrol and respond to threats, according to CBP officials.
Within CBP, the Office of Facilities and Asset Management and Border Patrol each have offices that oversee the maintenance and repair of roads and other tactical infrastructure (TI) that Border Patrol agents need to conduct operations. The Office of Facilities and Asset Management's FM&E oversees the necessary environmental and real estate plans, maintenance and repair contracts, and funding distribution. Within Border Patrol, ORMD oversees operational planning by collecting and managing maintenance requirements identified by sectors. ORMD also collaborates with FM&E in determining the amount of funding and resources each sector needs to address identified TI maintenance needs. Within ORMD, the Director of TI and support staff oversee all TI requirements and programs across all Border Patrol sectors.
Southwest Border Road Ownership and Type
The area along the southwest border is composed of federal, state, local, tribal, and private lands. Federal and tribal lands make up 632 miles, or approximately 33 percent, of the nearly 2,000 total border miles. State, local, and private lands constitute the remaining 67 percent of the border. Each of these entities, including CBP, owns and maintains roads that Border Patrol may use to patrol or to access TI along the border; however, Border Patrol's ability to use these roads depends on various factors, including its statutory authorities. Border Patrol may access public roads—i.e., roads under the jurisdiction of a public authority such as a federal, state, local, or tribal entity, and open to public travel—to the same extent as other users. CBP may seek permission of the owner in order to use nonpublic roads (roads owned by a public entity but not open to the public) or private roads (roads owned by a private entity) located beyond 25 miles of the border. In addition, Border Patrol generally makes arrangements with landowners in order to address maintenance of their roads. As mentioned previously, owned operational roads are those roads that CBP owns, leases, or has an irrevocable interest in, and therefore has a right to maintain. Non-owned operational roads are roads that CBP may maintain through a license or permit, though the landowner may revoke the license or permit at any time.
Therefore, CBP is not obligated to maintain these non-owned operational roads; any work to maintain and repair these roads is based on Border Patrol's operational requirements. Certain authorities allow federal agencies to enter into agreements with other federal agencies for various goods and services. Under such authorities, CBP may be able to use its appropriated funds to contribute to the maintenance of public roads owned by other federal agencies, but not to the maintenance of public roads owned by state and local entities. State and local public roads, which CBP is under no obligation to maintain regardless of use, are not considered owned or non-owned operational roads and therefore are not included in the approximately 5,200 miles of roads used by CBP. Figure 2 shows an example of a CBP-owned operational road providing direct access to CBP fencing, and figure 3 shows an example of a road owned by the U.S. Fish and Wildlife Service and used by Border Patrol for patrolling.
CBP Road Maintenance
CBP received $25 million in fiscal year 2016 for necessary repairs to border fencing and border roads. For fiscal year 2017, CBP received an additional appropriation for operations and support, of which $22.4 million is for border road maintenance. CBP uses Comprehensive Tactical Infrastructure Maintenance and Repair (CTIMR) contracts to address maintenance of TI assets along the southwest border, including owned and non-owned operational roads. CTIMR contracts provide a mechanism for CBP to address both routine and urgent maintenance and repair of the roads Border Patrol uses for its operations by providing funds to contractors who perform the required maintenance. Routine maintenance and repair include work that is required due to normal wear and tear, deterioration due to age, and other damage not caused by severe weather events or suspected intentional sabotage. Urgent repair requirements are typically the result of severe weather events or suspected intentional damage.
For the purposes of maintenance requirements and funding distribution, CBP divides the nine southwest border sectors into four work areas, with each work area operating under a separate CTIMR contract. The four work areas consist of the following sector groupings: (1) San Diego and El Centro sectors; (2) Yuma and Tucson sectors; (3) El Paso and Big Bend sectors; and (4) Del Rio, Laredo, and Rio Grande Valley sectors. CBP's FM&E determines the contract amount for each work area over a 5-year contract period. Table 1 provides a breakdown of the cost incurred by CBP for road maintenance and repair by work area and sector for fiscal year 2016.
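To make the work-area groupings concrete, the following minimal Python sketch represents the sector-to-work-area mapping that drives CTIMR contracting. The sketch is ours, for illustration only; the 1 through 4 numbering simply follows the order in which the report lists the groupings, and the lookup helper is an assumption, not a CBP system.

```python
# The four CTIMR work areas and their sector groupings, as described above.
CTIMR_WORK_AREAS = {
    1: ("San Diego", "El Centro"),
    2: ("Yuma", "Tucson"),
    3: ("El Paso", "Big Bend"),
    4: ("Del Rio", "Laredo", "Rio Grande Valley"),
}

def work_area_for(sector: str) -> int:
    """Return the CTIMR work area (and thus the contract) covering a sector."""
    for area, sectors in CTIMR_WORK_AREAS.items():
        if sector in sectors:
            return area
    raise ValueError(f"{sector!r} is not one of the nine southwest border sectors")

# Example: Yuma and Tucson operate under the same CTIMR contract.
assert work_area_for("Tucson") == work_area_for("Yuma") == 2
```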
CBP Has Various Authorities and Arrangements for Using and Maintaining Roads, but Documentation and Communication of Its Processes and Criteria for Distributing Maintenance Funding Are Limited
CBP Has Authorities and Arrangements for Using and Maintaining Roads
Border Patrol generally has access to public roads to the same extent as other users and has certain authorities to use other federal, state, local, tribal, and privately owned roads. According to local, tribal, and Border Patrol sector officials, CBP uses and is sometimes the primary user of roads owned by states, counties, cities, and localities, but does not have a specific appropriation to engage in public improvements, including the maintenance and repair of such roads it uses for border security operations.
Public roads. Border Patrol has access to public roads—those under the jurisdiction of and maintained by a public authority (federal, state, local, or tribal entity) and open to public travel—to the same extent as other users of such public routes. While Border Patrol has authority to use such public roads for border security operations, CBP is statutorily prohibited from maintaining and repairing nonfederal (state, local, county, and city) public roads because performing such work without a specific appropriation could violate the Anti-Deficiency Act and 41 U.S.C. § 6303, which prohibit the U.S. government from making or authorizing an expenditure exceeding available appropriated funds. To ensure access to TI in proximity to the border by way of lands that are owned by public entities but not open to public use, Border Patrol uses various arrangements, including easements, special use permits, and multiple use agreements, to gain access to such property. For example, Border Patrol obtained various easements from a city located at the border, granting it access to strategic locations to conduct surveillance of high illegal traffic areas, according to the city's public works director.
Other federal agency roads. CBP may obtain a special use permit or enter into interagency agreements with other federal agencies to address maintenance and repair of federal roads and land. CBP may also enter into informal cooperative and undocumented arrangements with other federal agencies to access certain roads the agencies use for conducting their operations but that are not open to the public (e.g., administrative roads). For example, Bureau of Land Management (BLM) officials in Tucson, Arizona, told us that Border Patrol uses BLM administrative roads that are open to other law enforcement agencies in the area, but not to the public. According to these BLM officials, maintenance agreements or reimbursements are not needed from Border Patrol or the other agencies that use the roads. Further, Border Patrol has access to all federal lands, as necessary, under a January 2017 Executive Order that requires the Secretary of Homeland Security, the Secretary of the Interior, and other relevant agency heads to grant Border Patrol, as well as authorized state and local officers, access to such lands.
Private roads. Border Patrol has statutory authority to, without a warrant, access private lands (i.e., privately owned or otherwise nonpublic roads and land), but not dwellings, within 25 miles of the international border to prevent illegal entry of foreign nationals. According to CBP FM&E officials, no further real estate action is required to access these roads; however, this authority does not permit CBP to maintain and repair such private roads. Access to private roads and land beyond 25 miles from the border generally requires a warrant or permission of the landowner, and all maintenance would be provided for in an arrangement with the landowner. CBP may seek to establish mutually beneficial relationships, including through various arrangements with private landowners, to use, and as appropriate, maintain and repair certain private roads based on Border Patrol's operational requirements, such as to enhance Border Patrol's ability to perform operations. These arrangements include, but are not limited to, licenses and permits, which are written and revocable consent from landowners for CBP's specified use of their land.
CBP may seek to maintain and repair the privately owned roads leading to TI located in proximity to the border. To do so, CBP secures land rights to maintain and repair these access roads through fee interests, easements, and leases. Border Patrol leverages various mechanisms to ensure access to the privately owned roads it needs to conduct its operations on the southwest border. For example, Border Patrol officials told us that they cultivate and maintain good relations with private landowners to ensure access to roads.
CBP Does Not Consistently Document and Communicate Its Arrangements with Landowners
CBP Uses CTIMR Contracts to Address Road Maintenance
CBP addresses maintenance of roads, as well as all other TI it uses for its operations, through CTIMR contracts and agreements. CTIMR road maintenance involves a collaborative process that uses a prioritization scheme which, according to CBP's 2015 Roads Policy Memo, ensures that in an environment of limited funding, CBP would fund maintenance and repair of owned operational roads first, followed by non-owned operational roads, where permitted. According to CBP officials, this process entails the following three steps:
Step 1: Border Patrol stations identify road maintenance requirements on an ongoing basis and provide those requirements to sector and headquarters leadership for approval.
Step 2: Once approved, sector road maintenance requirements are forwarded to FM&E for real estate and environmental clearance.
Step 3: Environmentally cleared sector road maintenance requirements are prioritized and added to quarterly CTIMR maintenance work plans as the plans are developed.
According to CBP FM&E officials, in order for sectors' requested road maintenance to occur, three criteria must be met. First, CBP must obtain an agreement from the landowner (for non-owned operational roads) authorizing maintenance of the road. Second, the road must undergo an environmental analysis and obtain environmental clearance. Third, appropriated funds must be available for the maintenance. If all three criteria are met, CBP places the road requirement in its Work Management System—a database CBP uses to track and oversee all TI maintenance and repair work for its work plans, which are prioritized and executed every 90 days. Sector officials are responsible for reviewing each work plan and prioritizing maintenance and repair that are critical to border security operations, and communicating any updates to CBP officials for execution. Road requirements that are not funded in a given period are pushed to the next work plan, according to CBP officials.
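Taken together, the three gating criteria and the owned-roads-first prioritization describe a simple triage rule. The following Python sketch is our illustration of that rule, not CBP's actual Work Management System logic; the record fields and the "funded slots" abstraction for available appropriations are assumptions made for readability.

```python
from dataclasses import dataclass

@dataclass
class RoadRequirement:
    """One sector-identified maintenance requirement (hypothetical fields)."""
    road_id: str
    owned_operational: bool        # CBP-owned/leased road vs. non-owned operational road
    landowner_agreement: bool      # needed only for non-owned operational roads
    environmental_clearance: bool  # real estate/environmental review complete

def eligible(req: RoadRequirement) -> bool:
    """Apply the first two criteria; funding availability (the third criterion)
    is handled when the work plan is filled."""
    if not req.environmental_clearance:
        return False
    # Owned operational roads need no landowner agreement; non-owned roads do.
    return req.owned_operational or req.landowner_agreement

def build_work_plan(backlog: list, funded_slots: int):
    """Fill one 90-day work plan: owned operational roads first, then
    non-owned; anything unfunded rolls into the next quarter's backlog."""
    ranked = sorted((r for r in backlog if eligible(r)),
                    key=lambda r: not r.owned_operational)  # False sorts first
    plan = ranked[:funded_slots]
    carryover = [r for r in backlog if r not in plan]
    return plan, carryover
```

Funding is modeled here as a count of fundable requirements per quarterly plan; in practice CBP allocates dollar amounts under each CTIMR contract, as described earlier.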
CBP Has Not Consistently Documented Arrangements with Landowners or Communicated Such Arrangements with Border Patrol Sectors
CBP enters into various arrangements with federal, state, and local agencies and with some private landowners to maintain the roads it uses for its operations; however, it has not consistently documented its arrangements with these landowners or shared the arrangements it has documented with Border Patrol sector officials. Officials of six of the nine southwest Border Patrol sectors we contacted indicated that they do not document all arrangements for private road maintenance, while officials of one sector said they were unsure if all such arrangements were documented.
Federal, state, and local agencies. CBP has documented arrangements with federal, state, and local agencies, including, but not limited to, interagency agreements and memorandums of understanding (MOU) with other federal agencies, and easements with state and local agencies. For example, CBP has an agreement with the U.S. Forest Service, through which it allocates $1.5 million to the U.S. Forest Service annually to maintain roads the Border Patrol Tucson sector uses in the Coronado National Forest. Tucson sector officials said that providing the funding to the U.S. Forest Service to do the actual road maintenance was less expensive than paying a private contractor to do the maintenance. This process is also more efficient because U.S. Forest Service employees are more familiar with the roads and forest area, according to sector officials. CBP also entered into an MOU with the National Park Service in 2012 that authorizes CBP to maintain and repair certain roads that Border Patrol uses for its operations in the Organ Pipe Cactus National Monument in southern Arizona.
Although CBP has arrangements with some landowners to address road maintenance, it has not consistently documented arrangements with all such owners. Further, CBP has not shared documented arrangements with all relevant Border Patrol sector officials, including officials responsible for prioritizing sector road maintenance funding needs, which could hinder efforts to maintain roads. Tucson sector officials told us that their sector works with other federal, state, local, tribal, and private landowners to address road maintenance; however, such maintenance is not always addressed through written arrangements. For example, the sector has documented agreements with the Arizona Department of Transportation for maintenance of 11 checkpoints Border Patrol has established on its roads. Conversely, it does not have a written agreement with the Tohono O'odham Nation, a federally recognized tribe, whose reservation straddles the border. Rather, sector officials have had informal arrangements with the tribe and the Bureau of Indian Affairs (BIA) for several years on maintenance of several of the tribe's roads, including two frontage roads which BIA manages and Border Patrol uses routinely for its operations.
Border Patrol sector officials cited various reasons for using and addressing non-owned operational road maintenance without documenting arrangements with the road owners. For instance, officials noted that maintaining roads can facilitate good relations with landowners, thereby enabling Border Patrol's access to roads. Officials also explained that keeping roads in good working condition, even in the absence of a documented agreement, is mutually beneficial to both Border Patrol and landowners. For example, according to Yuma sector officials, the sector addresses maintenance of the Marine Corps roads it uses for its operations although it does not have a documented agreement for maintenance. Yuma sector officials said that CBP FM&E drafted an MOU between CBP and the Marine Corps in 2013 that would allow Border Patrol maintenance personnel (or personnel contracted by Border Patrol) to access border roads for maintenance; however, the Department of the Navy, on behalf of the Marine Corps, has not yet signed the MOU.
In the absence of a written agreement, a CBP employee at Yuma sector performs maintenance on Marine Corps roads, at the Marine Corps' request, because, according to officials, Border Patrol agents benefit from accessible roads.
In some instances, CBP has documented arrangements with federal agencies, but has not shared those arrangements with all relevant Border Patrol sector officials, particularly those responsible for planning for and prioritizing sector road maintenance needs. For example, road maintenance planning officials at Big Bend sector told us that they do not have a documented agreement with the Big Bend National Park in western Texas to address maintenance and do not contribute toward maintenance of any of the park's roads, which Border Patrol uses routinely. They added that FM&E is working on a current project to determine how CBP and the National Park Service could share maintenance costs for their joint use of the park's roads—an agreement that could extend to other parks in other sectors. However, CBP later provided us with a copy of an agreement it entered into with Big Bend National Park in July 2016. The agreement was effective from October 2016 through September 2017; however, Big Bend sector officials were not aware of this agreement at the time of our March 2017 meeting with them.
Similarly, Yuma sector officials said that Border Patrol also helps maintain a DOI-owned road the sector uses routinely for its operations without a written agreement, because it is mutually beneficial and helps maintain good relations with DOI. However, CBP officials later provided us with a copy of an agreement with BLM, a component of DOI, which addresses maintenance of the BLM roads in question. The agreement, which was effective from September 2016 through September 2017, was executed in August 2016, 6 months prior to our March 2017 interview with Yuma officials; however, sector officials were not aware of the agreement at the time of our interview.
In addition to CBP not consistently sharing documented arrangements with relevant Border Patrol sector officials, we identified instances where written maintenance agreements between CBP and the federal landowners had expired, despite Border Patrol's continued need to access the roads covered by the expired agreements. For example, the U.S. International Boundary and Water Commission entered into a maintenance agreement with CBP in December 2005 for the resurfacing of approximately 100 miles of a levee road the Rio Grande Valley sector uses along the Rio Grande River. While this agreement expired in September 2015, the commission was allowing Rio Grande Valley sector officials to continue using the levee road at the time of our January 2017 visit to the sector, while a new MOU was being negotiated. International Boundary and Water Commission officials characterized the undocumented agreement Border Patrol was operating under as a verbal "gentleman's agreement." Similarly, El Centro sector officials told us that the sector does not have a documented agreement with BLM for use and maintenance of certain BLM roads and land. According to sector officials, agents work to maintain good relations with BLM even though Border Patrol can and does leverage its statutory authority and law enforcement mission to access BLM roads and land.
CBP officials later provided us with a copy of an agreement with BLM that addresses maintenance of BLM roads in El Centro sector; however, the agreement had expired in December 2016, 3 months prior to our meeting with El Centro sector officials.

Private landowners. CBP has obtained licenses from some, but not all, of the private owners whose roads the agency maintains. Also, it has not consistently shared with Border Patrol sector officials the documented road maintenance arrangements it has with private landowners. For example, CBP obtained a revocable license in July 2015 from a private gravel company that allows the Laredo Border Patrol sector to maintain and repair roadways on the company's property for use in patrolling the border area. Laredo sector officials stated that they sometimes receive pushback from landowners regarding Border Patrol accessing their land, but in general, most landowners want Border Patrol on their property. Conversely, El Centro and El Paso sector officials reported that they do not have documented license agreements with private landowners regarding the maintenance of privately owned roads. In the El Centro sector, officials stated that they typically have verbal rather than documented agreements with private landowners for maintenance. These officials stated, however, that documenting agreements would provide a clearer understanding of how privately owned roads are to be maintained.

A number of factors contribute to the lack of documented road maintenance arrangements between Border Patrol and private landowners. First, some landowners choose not to pursue a license agreement with Border Patrol to address maintenance of their roads as a condition of access to the roads because they support Border Patrol's mission and need the security provided by the agency. In these instances, landowners have no concerns about Border Patrol agents accessing their land without a documented agreement. For example, five private landowners we met with individually, as well as others we met with in three separate community group meetings, told us they did not have a documented license agreement with Border Patrol, but some of them nonetheless allow Border Patrol to continue using their roads without addressing maintenance. However, one private landowner we interviewed told us that regardless of whether a ranch owner wants Border Patrol agents on his or her property for the security they provide, the additional money the owner must spend to maintain his or her roads used by Border Patrol is a financial burden. Second, some landowners are not aware that Border Patrol can enter into arrangements with them to address maintenance of their roads. For example, two of the five landowners who lack documented license agreements with Border Patrol told us they were not aware of this option. Third, some landowners are interested in maintenance agreements but have not received them. For example, three landowners told us they had requested an agreement to address maintenance of their roads; however, Border Patrol had not worked with them on such an agreement. Two of these landowners said they generally incur an additional maintenance cost due to Border Patrol's regular use and lack of maintenance of their roads. For example, on our site visit to the Tucson sector, one landowner told us that Border Patrol uses approximately 37 miles of road on his ranch without a written license agreement to maintain the roads, although he had requested one from Border Patrol.
He estimated that he spends approximately $3,000 per mile annually to repair the roads that Border Patrol predominantly uses. He, as well as two other landowners we interviewed, told us they have considered preventing Border Patrol from using their roads. Fourth, some private landowners do not want a documented maintenance agreement with Border Patrol. According to Border Patrol sector officials, some of these landowners would rather not have to comply with any environmental regulations that may come with signing a formal license agreement with a federal agency and instead prefer a "handshake agreement."

In addition to not consistently documenting arrangements, Border Patrol sectors were not consistently aware of the documented arrangements CBP has with private landowners. For example, Big Bend sector officials told us that CBP does not have a documented license agreement with any private landowner in their sector. According to sector officials, the sector consists predominantly of private land, the vast majority of which is located beyond 25 miles of the border and therefore outside the area in which Border Patrol may access private land without a warrant. Sector officials told us that, to prevent these owners from denying access to their roads, they try to maintain good relationships with the owners of the roads Border Patrol uses but does not maintain by addressing damage agents cause to those roads. Big Bend sector officials added that they discuss and verbally agree with landowners on any required road maintenance, relying on the relationships agents have established with those landowners to come to agreement. However, CBP headquarters officials subsequently provided us copies of five license agreements, all executed in August 2016, that CBP has with private landowners in the Big Bend sector. CBP officials also told us that an additional two license agreements were in the process of being finalized. They added that Border Patrol's ranch liaisons, who serve as Border Patrol's conduits to landowners, are typically aware of these and other license agreements with landowners in their sectors, and are responsible for making other sector officials aware of the existence of the agreements.

We asked CBP FM&E and Border Patrol officials why arrangements for road maintenance are not consistently documented or shared with Border Patrol sectors. Officials from CBP FM&E, the office primarily responsible for managing documented road maintenance arrangements, including license agreements, said that agreements are documented based on operational need by Border Patrol, and added that FM&E works with Border Patrol sectors to determine which roads need licenses. They also stated that all licenses and agreements are held in the FITT system and tracked in the eGIS. Officials from ORMD provided the following rationales regarding documenting and sharing agreements. First, ORMD officials stated that license agreements for road maintenance with private landowners are managed on a case-by-case basis, depending on the needs of the landowners and the Border Patrol sector. The standard is that a road must have both real estate and environmental clearance prior to receiving maintenance and repair.
Second, according to these officials, not every legacy road license agreement has been transitioned over to CBP's new system for documenting road maintenance, which may explain why neither the owner (especially of land that has been passed from one generation to the next) nor sector officials know that an agreement exists or seek to renew it. In other instances, some historical use agreements have yet to be formally documented. According to ORMD officials, the operational impact to Border Patrol of undocumented agreements can be determined only on a case-by-case basis and will likely depend on the location of the road and the ability to use adjacent alternate roads. They added, however, that in general, the lack of documentation can slow Border Patrol's access to some roads. ORMD officials stated that in the absence of documented agreements, Border Patrol takes great effort in maintaining relationships with landowners to ensure continued access to the roads it needs. In cases where landowners are apprehensive about entering into formal license agreements with the government, Border Patrol's ranch liaisons continue to work with landowners to further engage them about entering into a documented agreement.

Standards for Internal Control in the Federal Government requires that agencies clearly document and communicate all transactions and other significant events, and make the documentation readily available for examination. According to these standards, the documentation may appear in management directives, administrative policies, or operating manuals and may be in paper or electronic form. Those standards also require that management internally communicate the necessary quality information throughout an agency, using established reporting lines to achieve the agency's objectives. Without documenting and communicating the arrangements it has with landowners, Border Patrol has no record of what was agreed to with owners in terms of road maintenance, which could hinder Border Patrol efforts to access and maintain certain roads. Developing a policy and related guidance for documenting arrangements with landowners, as needed, and ensuring that the documented agreements are shared with all relevant Border Patrol sector officials could help Border Patrol work with road and land owners more consistently to address road maintenance. Such a policy could also better provide opportunities to owners who want formalized arrangements, and enhance the sectors' ability to plan for road maintenance requirements.

Border Patrol Has Not Clearly Documented or Shared Its Processes and Criteria Used to Distribute Road Maintenance Funding to Its Sectors

Border Patrol uses any funding that remains after owned operational road requirements are addressed to maintain non-owned operational roads; however, Border Patrol has not clearly documented the process and criteria it uses for prioritizing maintenance of the non-owned operational requirements, or shared them with sector officials. After Border Patrol distributes CTIMR funds to address its owned operational road maintenance, there are thousands of miles of non-owned operational roads that do not receive funding for maintenance. CBP FM&E officials explained that there is not a dedicated budget for non-owned operational roads and, therefore, there is not sufficient funding to address all the roads in need of maintenance. Also, because CBP does not collect data on the frequency of its road use, it is limited in its ability to effectively dedicate funding for road maintenance.
The funding to address maintenance and repair of non-owned operational roads is derived from two main sources. First, CBP has the option of redistributing excess funding from any unneeded owned operational road maintenance project among the sectors. For example, if the roads in Tucson sector are not damaged as much as anticipated during the annual monsoon season, CBP can redistribute funds originally designated for Tucson sector to road maintenance projects in other sectors within the same work areas. The redistribution of such funds is determined by Border Patrol's Director of TI. Second, officials said that if funding from an additional appropriation is made available, as was the case in fiscal year 2016, they can use it to address non-owned operational road maintenance.

Border Patrol makes decisions on how to prioritize maintenance of non-owned operational roads; however, the process and criteria it uses for making such funding decisions are not clearly documented and are not shared with Border Patrol sector officials. During the course of our review, we requested that ORMD provide a description of its prioritization process both verbally and in writing. ORMD officials provided us with a written description that included the following six steps for prioritizing non-owned road maintenance:

Step 1: Review sectors' past year priorities, utilizing a road requirements working group composed of representatives of all divisions of the three Border Patrol directorates.

Step 2: Receive and review planning guidance from Border Patrol senior leadership.

Step 3: Identify current and emerging threats.

Step 4: Review the State of the Border Risk Methodology for updated risk levels.

Step 5: Draft priority lists, utilizing the road requirements working group.

Step 6: Brief, adjust, and obtain concurrence for priority lists, utilizing the road requirements working group and executive governance.

The document ORMD prepared for us also cites various criteria for making funding decisions about non-owned roads, including whether each proposed road requirement is considered a vulnerability. If it is considered a vulnerability, ORMD determines whether it is documented in the Capability Gap Analysis Process and how the vulnerability ranks among other identified vulnerabilities within the station and sector where the road is located, and in the nation as a whole, to inform leadership. Further, according to the document, ORMD officials determine the urgency of funding the road requirement and whether it can be funded given available resources.

ORMD officials identified various other factors that go into the decision-making process for prioritizing non-owned road maintenance. However, these factors were different from the criteria included in the document they prepared for us. For example, ORMD officials said that when prioritizing sectors' non-owned road maintenance, planners must first consider sectors' ranking on Border Patrol's annual investment prioritization list, which is based on intelligence, threat level, and other information pertaining to each sector. ORMD officials stated that this list serves as a starting point for the decision-making process to prioritize sectors' non-owned operational road maintenance requirements.
Officials added that the investment prioritization list is intended to help them with the six-step maintenance prioritization process described above; however, not all of the factors they consider when deciding which non-owned operational roads to maintain in each sector are documented. They explained that the majority of their personnel have been trained on the road maintenance planning process and are familiar with all factors that go into the decision-making process. ORMD officials said that sectors' investment prioritization rankings are not shared with the sectors. They explained that they prefer not to share the list or sectors' rankings with the sectors because this information is intended to guide their decision-making, but is not the only factor they use in determining which sectors should receive remaining funding for non-owned operational road maintenance.

Officials from none of the nine sectors we contacted reported being aware of the process and criteria ORMD uses to prioritize and fund maintenance of non-owned operational roads. Rio Grande Valley sector officials told us that funding of maintenance requirements for the Rio Grande Valley sector takes priority over funding of other sectors' non-owned operational road requirements. However, these officials stated that they were unsure why this was the case, primarily because Border Patrol had not shared the process and criteria it uses for non-owned operational road maintenance decision-making.

Standards for Internal Control in the Federal Government requires that agencies clearly document all transactions and other significant events, and make the documentation readily available for examination. According to these standards, the documentation may appear in management directives, administrative policies, or operating manuals and may be in paper or electronic form. Those standards also require that management internally communicate the necessary quality information throughout an agency, using established reporting lines to achieve the agency's objectives. By clearly documenting and communicating the process and criteria it uses for making decisions on funding non-owned operational requirements, ORMD could better ensure that sector officials are aware of the process and criteria, and can therefore better plan for and anticipate funding to meet their sector road maintenance needs. Moreover, documenting and communicating the process and criteria by which it makes funding decisions on non-owned operational road requirements would ensure that Border Patrol has a record of the process that does not depend on the individuals with current knowledge of it remaining in the same positions.

Border Patrol Operations May Be Affected by CBP's Inability to Maintain Certain Public Roads That Are in Poor Condition, but CBP Has Not Assessed Maintenance Options

Border Patrol Officials Reported That Certain Public Roads in Poor Condition Affect Border Security Operations

Border Patrol sector officials we interviewed reported that poorly maintained public roads negatively affect their ability to conduct security operations. Officials from six of the nine southwest border sectors reported that poorly maintained public roads negatively affect their ability to respond to threats because of limited road access or increased response times, and cause additional wear and tear on vehicles.
For example, El Paso sector officials said that a 14-mile stretch of a public county road they use to access a forward operating base is severely rutted, limiting agents' ability to access the southernmost points of their patrol area. In addition, officials from Laredo sector told us that a 40-mile county-owned road in the western part of the sector is in such poor condition that agents cannot always use it. The alternative route agents take adds approximately 90 minutes to their patrol time. Laredo sector officials also said that when agents do use roads like this one, it results in wear and tear on vehicles. Laredo sector officials reported that they had to contract for outside mechanics as a result of additional demands for vehicle repairs. Figure 4 documents the poor condition of the county road in Laredo sector.

The extent to which Border Patrol operations are negatively affected by the poor conditions of certain public roads is unknown because, according to CBP and Border Patrol officials, Border Patrol does not collect or maintain data on the extent of its use of any non-owned roads, including public roads. According to officials, Border Patrol does not collect such data because it does not make road maintenance decisions based on how frequently it uses a road, but rather on how critical the road is to its operations. Border Patrol officials said that they have assessed various ongoing or planned CBP data collection initiatives that Border Patrol could leverage to identify how often it uses non-owned roads. For example, officials with CBP Enforcement Systems Division—the office responsible for integrating technology initiatives with operations in support of Border Patrol's mission—said that CBP's Blue Force initiative—a method, usually based on the Global Positioning System (GPS), of tracking the locations of operational assets, including vehicles and agents, in real time to better coordinate operations—would collect GPS tracking data. However, Border Patrol officials stated that the Blue Force initiative and other GPS tracking initiatives have not received all planned funding amounts.

Because CBP and Border Patrol officials said they do not have data that identify the extent of Border Patrol's use of non-owned roads, we gathered examples from each of the nine southwest Border Patrol sectors of state, county, city, and tribal public roads in poor condition that CBP is unable to maintain and that sector officials said negatively affect their ability to conduct operations. Table 2 provides examples of the public roads sector officials identified, including a description of the roads and how the road conditions negatively affect Border Patrol's operations.

CBP Officials Reported That the Inability to Maintain Certain Public Roads That Are in Poor Condition May Be Impeding Border Patrol's Relations with Local Governments and Communities

CBP's inability to address the maintenance of certain public roads Border Patrol regularly uses can negatively affect Border Patrol's relations with local governments, according to CBP officials. Officials from two counties and one tribe we spoke with told us that in certain rural areas along the border, Border Patrol uses some public roads heavily or is the primary user, and its use creates more wear and tear on the roads than would ordinarily be caused by general public use.
These officials said that their agencies are responsible for fully funding required maintenance of the roads they own; however, they may not address needed maintenance for two reasons. First, their agencies do not have sufficient funding because they do not have the necessary tax base to generate funds for extensive road maintenance. Second, with limited funding, agencies may prioritize roads the general public uses more frequently over rural roads used regularly by Border Patrol. These county officials and Border Patrol sector officials told us that CBP's inability to offer any maintenance assistance for public roads Border Patrol needs for operations makes collaboration with local governments challenging and hurts Border Patrol's credibility. For example, officials we met with in an Arizona county identified a 5-mile stretch of road within their county that Border Patrol uses frequently because it provides access to the border. County officials told us they currently spend $23,000 more each year to maintain the 5-mile road than they would typically spend on a similar stretch of road, as a result of the wear and tear they attribute to Border Patrol's use. Figure 5 shows potholes and deteriorating shoulders on the county road.

In addition, officials from the Tohono O'odham Nation told us they do not have sufficient BIA funding to maintain a major, 28-mile public thoroughfare leading to a Border Patrol forward operating base and the border. Tucson sector officials said they are likely the primary user of the southern end of the road and may create heavy wear and tear. Tohono O'odham Nation officials reported that BIA would require approximately $14.5 million to repair the 28-mile road; however, BIA receives approximately $26 million annually for road repairs to cover the 29,000 miles of roads under its jurisdiction. Figure 6 shows the eroded condition of this tribal road. Officials from the Arizona county and the tribe have requested Border Patrol's assistance in maintaining public roads. As of July 2017, however, Border Patrol had not provided such assistance.

Border Patrol sector officials also said that relations between Border Patrol and local border communities can be negatively affected by poor road conditions, because the communities attribute the conditions to Border Patrol's use. These relations are important, as Border Patrol relies on good relations with communities to access roads owned by private landowners in the community to conduct operations, according to Border Patrol officials. Members of an Arizona community coalition, which meets regularly to discuss options for addressing maintenance of a poorly maintained public road that Border Patrol uses routinely, told us that Border Patrol's use of the road creates conditions that negatively affect the local community and damage relations with Border Patrol. Similar to the negative effects Border Patrol officials reported, members of this community coalition told us they experience slower response times by emergency response vehicles and damage to vehicles from poor road conditions, resulting in higher vehicle maintenance costs. In addition, coalition members told us that poor road conditions have negatively affected the local economy. For example, residents we met with from a town located near recreational amenities reported a decline in tourism revenue. They stated that, in their view, the poor condition of roads Border Patrol routinely uses has contributed to declines in tourism.
CBP Has Not Assessed Options for Addressing Maintenance of Nonfederal Public Roads in Poor Condition

CBP and Border Patrol officials have discussed two options that, if implemented, could offer possible mechanisms for addressing maintenance of nonfederal public roads. However, officials also discussed challenges each option would present to CBP, and CBP has not assessed these or other options for addressing maintenance of the state, county, city, and other local roads it uses for its operations.

First, CBP officials told us they have considered seeking a specific appropriation to maintain state and local (i.e., nonfederal) public roads through financial or labor assistance. However, CBP officials said that involvement in public road maintenance may raise liability considerations and potential conflicts with the agency's primary mission. For example, CBP officials indicated that if CBP maintained nonfederal public roads, it could be subject to negligence claims in relation to the repairs it conducts. Additionally, CBP would require additional resources to negotiate necessary contracts with public authorities, to ensure the authorities spend money appropriately, and to oversee the network of their roads that could be necessary for CBP's operations, according to officials. In addition, the time and resources spent on road maintenance could divert Border Patrol from its primary mission of securing the borders, according to CBP officials.

Second, CBP and local officials we met with discussed two grant options that could be informative in considering options to address the maintenance of public roads Border Patrol uses routinely. While the specific grants discussed may not apply to CBP or road maintenance, the officials provided them as examples of grants that promote cooperation between federal agencies and local governments. First, after securing necessary legal authorities, CBP could establish a grant program, which would allow CBP to provide funding to state and local entities for road maintenance. Officials suggested that such a program could also allow the public entities that own the roads to conduct the maintenance themselves, alleviating Border Patrol's liability and resource concerns. For example, Border Patrol officials discussed the success they have experienced using Operation Stonegarden to leverage state and local resources for border security while building relations with local law enforcement. Operation Stonegarden provides funds for joint CBP, Border Patrol, and federal, state, local, and tribal law enforcement agency efforts to secure U.S. borders. These officials offered that a similar program could enable CBP to provide funding to public entities to maintain certain roads. Second, CBP and local officials identified federal funding for road maintenance that is available to public agencies through other federal agencies and to which CBP may be able to contribute. For example, officials of a public water drainage district and town we met with said they had previously applied for a Federal Lands Access grant. The Federal Lands Access Program supplements state and local resources for public roads, among other transportation-related infrastructure, with an emphasis on high-use recreation sites and economic generators. The program requires applicants to provide at least a 20 percent match of the project cost. Officials from the public water drainage district and town said another local public entity planned to help them with the match for this grant.
If Border Patrol had an appropriation for non-owned road maintenance, it could potentially help public entities, such as the water drainage district, meet the match for federal grants. As of July 2017, CBP and Border Patrol had not assessed or implemented any of the options described above, for two predominant reasons. First, CBP officials said the options each have accompanying challenges, in addition to the liability and management issues discussed above. For example, an appropriation to maintain public roads would not likely be sufficient to cover all road maintenance for the state, local, and tribal roads Border Patrol uses, according to CBP officials. They added that limited funding to maintain the roads would put CBP in the position of prioritizing some public roads over others, which may further strain relations with some public entities. Second, as discussed above, CBP officials told us they do not currently have data that demonstrate the extent to which Border Patrol relies on all non-owned roads, including public roads, to conduct its operations. CBP officials also said they do not keep data on the condition of roads owned by public entities. Without data on CBP's use of non-owned roads, determining a maintenance solution that uses an appropriate amount of resources would be challenging.

Standards for program management call for program managers to assess programs on an ongoing basis. To ensure continued success, program managers can use feasibility studies to determine whether implementing program changes could help mitigate any negative impacts. Assessing the feasibility of options to ensure adequate maintenance of nonfederal public roads, where necessary, including the data needed to determine the extent of its reliance on non-owned roads for border security operations, could lead to a possible solution for enhancing Border Patrol's operations and its community relationships.

Conclusion

Border Patrol's access to roads plays a key role in its ability to secure the nation's land borders from terrorism and other threats. While Border Patrol has entered into maintenance arrangements with the federal, state, and private landowners whose roads it uses for its operations, CBP and Border Patrol officials told us they have not consistently documented these arrangements because the need for an agreement with a landowner is determined on a case-by-case basis. By not documenting the arrangements it has with landowners, Border Patrol has no record of what was agreed to with owners in terms of road maintenance, which could hinder Border Patrol efforts to access and maintain certain roads. Similarly, Border Patrol has not clearly documented or shared with its sectors the process and criteria for determining which non-owned roads to maintain with its limited funding. By not clearly documenting and communicating the process and criteria it uses for making decisions on funding non-owned operational requirements, ORMD cannot reasonably ensure that sector officials are aware of the process and criteria, and therefore cannot ensure adequate planning for and anticipation of funding to meet sectors' road maintenance needs. In addition, Border Patrol generally has access to public roads and has certain authorities to use other nonpublic federal, tribal, and privately owned roads; however, it does not have a specific appropriation for public improvements.
Border Patrol agents reported experiencing negative effects on their operations, such as delayed response times, from using public roads that are generally in poor condition, conditions attributed to Border Patrol's use and CBP's inability to maintain the roads; however, CBP has not assessed options for maintaining these roads, partly because it does not collect data that indicate the extent of its reliance on all non-owned roads. Without assessing options, including data needs, that may exist for addressing maintenance of nonfederal public roads, CBP may be missing feasible opportunities for addressing maintenance of the roads, thereby forgoing an opportunity to enhance Border Patrol's ability to rapidly respond to threats at the border.

Recommendations for Executive Action

We are making the following three recommendations to CBP:

The Commissioner of CBP should develop and implement a policy and related guidance for documenting arrangements with landowners, as needed, on Border Patrol's maintenance of roads it uses to conduct its operations, and share these documented arrangements with its sectors. (Recommendation 1)

The Commissioner of CBP should clearly document the process and criteria for making decisions on funding non-owned operational requirements and communicate this process to Border Patrol sectors. (Recommendation 2)

The Commissioner of CBP should assess the feasibility of options for addressing the maintenance of nonfederal public roads. This should include a review of data needed to determine the extent of its reliance on non-owned roads for border security operations. (Recommendation 3)

Agency Comments and Our Evaluation

We provided a draft of this report to DHS, DOD, DOI, and USDA for review and comment. DHS agreed with our three recommendations. The department's response is reprinted in appendix II. DHS and DOI also provided technical comments, which we incorporated as appropriate.

In response to our first recommendation that CBP develop and implement a policy and related guidance for documenting road maintenance arrangements with landowners, and share these documented arrangements with its sectors, DHS concurred, stating that FM&E will issue updated guidance on addressing maintenance of assets on private land to Border Patrol and FM&E personnel located at the sectors. The updated guidance, according to DHS, will reference the agency's 2011 and 2015 policy and procedures for owned and non-owned road maintenance, as well as points of contact for additional information on landowner maintenance agreements. DHS also concurred with our second recommendation that CBP clearly document the process and criteria for making decisions on funding non-owned operational requirements and communicate this process to Border Patrol sectors. DHS stated that Border Patrol will outline the process and criteria for making these funding decisions and communicate the process to Border Patrol sectors. DHS concurred with our third recommendation that CBP assess the feasibility of options for addressing the maintenance of nonfederal public roads, including a review of data needed to determine the extent of its reliance on non-owned roads. DHS stated that Border Patrol, in collaboration with CBP FM&E, will review data on the extent of Border Patrol's use of non-owned roads for border security operations and develop a strategy that outlines options and assesses the feasibility of maintaining roads, as appropriate. These actions, if implemented effectively, should address the intent of our three recommendations.
We are sending copies of this report to the appropriate congressional committees; the Secretaries of Homeland Security, Agriculture, Defense, and the Interior; and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions, please contact me at (202) 512-8777 or gamblerr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made significant contributions to this report are listed in appendix III.

Appendix I: Selected Federal Agencies' Programs for Addressing Maintenance of Roads the Agencies Use but Do Not Own

We reviewed relevant authorities, policies, and procedures of three selected departments that maintain roads owned by others (non-owned roads) for conducting their operations—the Department of Defense (DOD), the U.S. Department of Agriculture (USDA), and the Department of the Interior (DOI). DOD addresses maintenance of all non-owned roads through its Defense Access Roads (DAR) program. The U.S. Forest Service (Forest Service) is a USDA component we identified that addresses maintenance of non-owned roads. While DOI officials stated that DOI is not authorized to directly address maintenance of the non-owned roads it uses for its operations, the Bureau of Indian Affairs (BIA), a component of DOI, partners with public agencies through the Tribal Transportation Program to address maintenance of non-owned roads that provide access to or within tribal lands. We discuss the authorities, policies, and procedures utilized by the DAR program, the Forest Service, and BIA in more detail in the following sections. These authorities, policies, and procedures are specific to the respective programs and therefore are not applicable to the U.S. Border Patrol (Border Patrol). In addition, U.S. Forest Service and BIA officials said that unlike the U.S. Forest Service and BIA, Border Patrol is not a public road agency—a federal, state, local, or Indian government or instrumentality with jurisdiction over, and authority to finance, build, operate, or maintain, public roads. Further, Border Patrol has various authorities, including the ability to access private land, and therefore roads on such land, located within 25 miles of the border without a warrant. The information presented below is intended to illustrate how other selected federal departments and agencies address maintenance of non-owned roads.

Department of Defense: Defense Access Roads (DAR) Program

Background

DOD and the Department of Transportation are jointly responsible for administering the DAR program. Established in 1956, the DAR program authorizes the Secretary of Transportation to use funds appropriated for the Department of Defense to fully or partially fund public road improvements and maintenance that are certified as important to national defense. The program provides a means for the military to pay its "fair share" of the cost of public road improvements and maintenance needed in response to sudden and unusual defense-generated traffic or road surface impacts, such as a significant increase in personnel at a military installation or use of a road by an oversized or overweight military vehicle, and to help ensure adequate transportation capacity is in place when needed.
According to DOD officials, the DAR program is primarily used to fund road construction that provides installation access and alternate routes to reduce congestion caused by an installation, and to fund maintenance of roads that support transportation of specialized military equipment traveling on public roads.

Authority

Through the DAR program, DOD is authorized to address the construction and maintenance of certain defense access roads that are certified to the Secretary of Transportation as important to the national defense. To implement its authorities, in 1978 DOD and the Federal Highway Administration (FHWA) together developed a set of DAR program eligibility criteria that specify the types of roads DOD can improve. These roads include (1) a replacement road; (2) a public road that creates new access to a military facility; (3) a road on which traffic has doubled as a result of the military's use; and (4) a rural county road that has limited carrying capacity and requires upgrade to sustain consistent movements of military equipment. DOD officials said that they use public roads like everyone else in the general public—that is, DOD components use the roads while the public owners (for example, a state, county, or city) maintain the roads—and are not authorized to address maintenance of a public road unless the road is determined by the DAR program to be a defense access road. DOD officials said they were not aware of any instances involving DOD's use of private roads. If there is any such use, DOD officials stated that there would be an agreement in place for addressing maintenance of the private roads.

DAR projects are funded from two sources—Military Construction funds and Operation and Maintenance funds. The particular source used for a project depends on the project's work classification and dollar amount. Projects for new construction that exceed $1 million are submitted as line item requests in the President's budget for authorization and appropriation in the Military Construction program. Maintenance and repair of existing roads under the DAR program is funded with Operation and Maintenance funds. Minor military construction projects costing $1 million or less may also be funded with Operation and Maintenance funds. DOD officials said that because there is no dedicated funding for the DAR program, it competes with every other military requirement (including on-base construction requirements). Ultimately, funding is based on a project's merit to meet a military mission.

Policies and Procedures

Under DAR program regulations, military installation commanders can initiate a request for assistance from DAR if there is a defense-related transportation need affecting the surrounding community. To initiate a DAR project, the local military base identifies the access or mobility requirement and submits a DAR needs report to the U.S. Army Military Surface Deployment and Distribution Command (SDDC). SDDC will then either conduct a DAR needs evaluation or request FHWA to make an evaluation of improvements that may be necessary, determine the scope of work to address the deficiencies, and develop a cost estimate. According to a document DOD officials provided on the DAR program, SDDC will determine if the proposed work meets the DAR program qualification criteria and, if so, certify the road as important to national defense, thereby making it eligible for DOD funds.
The military service operating the base is then responsible for submitting the budget request for the project funds through its normal planning, programming, and budgeting process. Once the military service has programmed the project, if the work is classified as new construction and exceeds $1 million, the funds must be authorized and appropriated by Congress. After congressional approval, the funds are transferred to FHWA and allocated to the agency administering the project (a federal, state, county, or other local transportation authority). A project memorandum of agreement (MOA) establishes specific roles and responsibilities for the officials involved in the DAR project. Upon completion, long-term maintenance of the improvement becomes the responsibility of the owning highway authority.

According to DOD officials, the most common DAR program maintenance projects involve maintenance and repair of rural county roads used by the Department of the Air Force to transport intercontinental ballistic missiles from their main base to remote locations. These roads are often gravel roads but also include portions of paved roads. For operational reasons, missile equipment cannot be transported over roads that are rutted or washboarded; therefore, DOD must maintain these roads to its standards, which are typically higher than the standards of the counties that own them, to ensure access and safety. DOD missile engineers coordinate with state and county transportation departments, as well as FHWA, to execute the maintenance requirements. DOD uses approximately 1,500 miles of gravel roads that must be kept at missile transporter standards. DOD uses another 1,500 miles of paved roads for the missile transporter mission; however, it does not generally maintain these roads, except in cases of emergency (e.g., surface washout or extreme snow removal). In support of the missile transport requirement, DOD has an MOA with each county and state it works with under the DAR program. These MOAs are general in nature, mostly outlining the roles and responsibilities of DOD, as well as those of the state or county. DOD officials explained that if paved roads fall into disrepair, DAR missile engineers, who are generally in close contact with state and local officials and have very good relationships with them, work to ensure the state or county maintains the road. Typically, the state and local transportation officials adequately maintain paved roads, while DOD generally maintains the unpaved roads. If the responsible state and local agencies do not have the necessary funds to maintain paved roads, DOD will look into using alternate routes for transporting the missiles, or other alternatives, but would not generally provide funding for the maintenance of paved roads, according to DOD officials.

Department of Agriculture: U.S. Forest Service

Background

The mission of the U.S. Forest Service, a component of USDA, is to sustain the health, diversity, and productivity of the nation's forests and grasslands to meet present and future needs. To accomplish this mission, the Forest Service manages and protects 154 national forests and 20 national grasslands in 43 states and Puerto Rico. The Forest Service uses a wide variety of roads to access national forest system lands.
A large portion of these roads are owned and managed by the Forest Service; however, the agency also relies on roads that cross land managed and owned by other federal, state, local, and private landowners, authorized by various types of easements, road use permits, or road rental agreements, to conduct its operations.

Authority

According to Forest Service officials, the Forest Service is a public road agency and therefore operates and maintains roads that are open to the public. In addition to these roads, Forest Service uses public roads like the general public—with the relevant public road agency bearing responsibility for maintenance of such roads. However, if traveling on a public road with a vehicle that is not standard for the particular road type, Forest Service would generally need to obtain a special-use permit as required by the relevant public road agency. Forest Service must also enter into agreements to use and maintain private roads. Conversely, if a road is located on an existing Forest Service-owned right-of-way through private property, Forest Service does not need additional permission to access and maintain the road. Also, according to Forest Service officials, in the event of an emergency (fire, pursuit), Forest Service can access a private road without permission.

Forest Service addresses maintenance of the owned and non-owned roads it uses for its operations with allocated funding, which is used for most road restoration, maintenance, and repair, as well as with funding from FHWA, which addresses maintenance of a smaller subset of roads. Forest Service can enter into agreements with other public agencies for use and maintenance of the agency's roads or land under various authorities. Funds available for forest development roads and trails are to be used by the Secretary of Agriculture to cover costs of construction and maintenance of such roads and trails, including those on experimental and other areas under Forest Service administration. A set formula is used to allocate Roads, Capital Improvement, Maintenance (CMRD) funding to each of nine Forest Service regions, and then to each forest. While CMRD funds can be shifted from one region to another as needed, officials said that there are restrictions on how funding from FHWA can be distributed and spent. In the case of either funding source, each forest determines how to spend the funding it receives for road maintenance. The criteria used for making these road maintenance decisions take into account, among other things, other road construction and maintenance plans for the region where the forest is located, according to Forest Service officials.

Policies and Procedures

According to Forest Service officials, the agency predominantly maintains its own roads and expects other entities, such as counties, to maintain their own roads, regardless of how frequently Forest Service uses a particular road. Forest Service's policy is to enter into road maintenance agreements with public agencies where there is a sufficient reason and available funding to do so. Forest Service meets annually with public road agencies and landowners to discuss existing and new road use agreements and maintenance plans, as well as shared road maintenance responsibilities, activities, and scheduled maintenance events. According to Forest Service officials, as of July 2017, the agency had issued 5,854 Forest Road and Trail Act easements to public road agencies and landowners nationwide.
These easements have clauses that direct the Forest Service and the grantees to enter into agreements to use and maintain each other's roads. Individual forests are responsible for forming any agreements they need for road maintenance. The specific terms of these agreements are economically driven, based primarily on beneficial need to the Forest Service and the availability of funding, according to Forest Service officials. Officials said that Forest Service will maintain a public road if it enables needed access. Officials said they can execute MOUs with entities, such as counties, to share road maintenance costs. Forest Service also partners with other federal agencies on use of federal highways.

The roads Forest Service typically uses are public-use and multi-use roads. According to Forest Service officials, there are not many instances in which Forest Service needs to access private roads. In instances where it does, Forest Service's policy is to obtain a perpetual, motorized, public-use easement; however, most private owners are hesitant to grant an ownership interest. Officials said that they use one-time agreements on a small subset of roads to address wear and tear in specific instances, commensurate with Forest Service's road use. For example, officials said that if Forest Service acquires land but has not yet acquired the roads leading to the property, it will enter into a short-term agreement to maintain the roads until it acquires ownership of them. All agreements are made on a case-by-case basis, according to officials, but the focus for Forest Service is always on the needs of the agency. According to Forest Service officials, maintenance agreements are rare because most of the roads the Forest Service needs are already maintained at the level it needs them to be. There are not many examples of Forest Service needing the roads to be maintained at a higher standard than they already are.

Forest Service officials also told us that the agency prioritizes maintenance of the roads it owns over maintenance of roads that are owned by others. Given the large network of roads under Forest Service's jurisdiction, there is rarely excess funding available to contribute to the maintenance of roads owned by local government agencies, officials said. Officials added that, to compensate for its limited funding, Forest Service has helped local government agencies address maintenance of their roads by providing a funding match that helps these agencies qualify for federal road maintenance grants. However, they said that they do this only on a case-by-case basis, and only when Forest Service and the local government agencies' priorities align. Forest Service officials said that the Forest Service collaborates extensively with other federal, state, and local agencies to finance road improvements and maintenance. Most of its collaboration is with states and counties. Forest Service collaborates with FHWA because the latter grants the necessary easements to states for the forest highways that the Forest Service uses. Because counties also receive FHWA funds to maintain forest highways and county road intersections, Forest Service works with counties on these roads to meet forest needs.
Department of the Interior: Bureau of Indian Affairs

Background

DOI's Bureau of Indian Affairs (BIA) is responsible for the administration and management of approximately 56 million acres of land held in trust by the United States for American Indians, Indian tribes, and Alaska Natives. BIA provides services, including transportation services, directly or through contracts, cooperative agreements, and grants, to approximately 1.9 million American Indians and Alaska Natives from the 567 federally recognized tribes. One of BIA's mechanisms for addressing maintenance of public roads that are not owned by tribes or BIA is the Tribal Transportation Program (TTP). Through the TTP, the Secretaries of Transportation and the Interior pay the costs of eligible transportation projects involving tribal transportation facilities, and other appropriate public road facilities, among other activities. Public roads whose maintenance is addressed through the TTP include roads owned by states, cities, counties, and other federal agencies. The TTP is jointly administered by the BIA Division of Transportation and the FHWA Federal Lands Highway Office.

Authority

According to BIA officials, BIA and tribal governments are public authorities and are authorized to enter into agreements with other public agencies to maintain non-owned roads that meet the definition of transportation facilities eligible for assistance under the TTP. The responsibility to maintain roads owned by another public authority belongs to the authority with jurisdiction over the route (unless otherwise provided for in an agreement or other usage permit). According to BIA officials, a tribe or BIA may use TTP funds to maintain roads owned by others, but only in accordance with an agreement allowing the tribe or BIA to carry out maintenance activities on the roads, and provided the public authority that owns the road cannot or will not use its funds to maintain its own road.

Policies and Procedures

BIA is organized into 12 regions, each of which has a TTP component that provides engineering, construction, and road maintenance services for highways, roads, bridges, trails, or transit systems that are located on or provide access to tribal land and appear on the National Tribal Transportation Facility Inventory. The 12 regions can enter into agreements with state and local governments to provide funding to maintain public roads that the state and local governments own and that provide access to tribal lands, when tribes have not assumed responsibility for administering the TTP. BIA enters into and administers these agreements for those tribes that do not have an agreement with BIA for transportation funding, known as "direct service tribes." Tribes that have such an agreement with BIA, or with FHWA, are responsible for administering the TTP and would enter into and administer agreements with state and local governments for maintenance of the roads those governments own that provide access to tribal lands. According to BIA officials, of the approximately 160,000 miles of roads that are eligible for TTP funding, BIA owns approximately 29,000 miles. The amount of funding distributed via the TTP is determined by a statutory formula based on several factors, including historic funding; miles of roads in the National Tribal Transportation Facility Inventory in 2004 and 2012; population; and a supplemental takedown designed to assist certain tribes with small shares of funding relative to their fiscal year 2011 funding base.
Prior to 2012, BIA allocated funding based on a regulatory formula that included needs data continuously updated by tribes. TTP funding can be used as the funding match that state and local agencies need to qualify for federal transportation improvement grants, depending on the transportation needs of tribal governments. According to 23 U.S.C. § 202(f), TTP funding is not intended to replace the funding state and local governments receive for planning, design, construction, and maintenance of their public roads.

Appendix II: Comments from the Department of Homeland Security

Appendix III: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Meg Ullengren (Assistant Director), Edith Sohna, and Colleen Corcoran made key contributions to this report. Also contributing to the report were David Alexander, Eric Hauswirth, Terence Lam, John Mingus, Sasan J. "Jon" Najmi, Claire Peachey, and Adam Vogt.
Why GAO Did This Study

To secure the southwest border between ports of entry, Border Patrol uses approximately 5,200 miles of roads, most of which are owned by other entities, both private and public. CBP estimates spending $12.5 million in fiscal year 2016 to maintain and repair roads Border Patrol uses for its operations, including roads CBP does not own. GAO was asked to review Border Patrol's use and maintenance of roads for its border security operations. This report examines the extent to which (1) CBP has processes and authorities to access and maintain roads for its security operations and (2) CBP's operations are affected by its use of public roads it cannot maintain, and options CBP could consider to address any needed maintenance. GAO selected three southwest border sectors to visit based on the sectors' total mileage of non-owned roads and number of apprehensions of illegal border crossers. GAO interviewed officials from Border Patrol, and from selected federal, state, local, tribal, and private and community organizations. The information collected from these entities is not generalizable, but it provides valuable insights.

What GAO Found

U.S. Border Patrol, within the Department of Homeland Security's (DHS) U.S. Customs and Border Protection (CBP), generally has access to public roads and has certain processes and authorities to use other federal, state, local, tribal, and privately owned roads for its operations. CBP may enter into arrangements or agreements to address maintenance of certain federal, state, local, and private roads, but CBP has not consistently documented these arrangements or shared them with all relevant Border Patrol sector officials. This could hinder maintenance efforts and, therefore, Border Patrol's access to the roads. Six of the nine southwest Border Patrol sectors reported that they do not document all road maintenance arrangements and agreements. Developing a policy and guidance for documenting maintenance arrangements and agreements, as needed, could help all sectors more consistently work with landowners to address road maintenance.

CBP has two categories for the roads it maintains: (1) roads that CBP owns and has a right to maintain (owned operational roads) and (2) roads that CBP does not own, but may maintain through a license or permit (non-owned operational roads). Border Patrol has established a process for prioritizing maintenance of owned operational roads, but it has not clearly documented the process and criteria for non-owned operational roads, or shared this information with sector officials. Moreover, no sector official GAO spoke with reported being aware of the process and criteria. By clearly documenting and communicating the process and criteria it uses to prioritize non-owned operational roads, Border Patrol could enable sectors to more adequately plan for and better anticipate funding to meet road maintenance needs.

Border Patrol sector officials reported negative effects from using public roads in poor condition that they cannot maintain, such as limited road access and poor relations with local governments and border communities that attribute the poor road conditions to Border Patrol's regular use. However, the full extent of these effects is unknown due to a lack of data on Border Patrol's use of non-owned roads. While CBP officials discussed options for addressing maintenance of nonfederal public roads, including a specific appropriation or a grant program, CBP has not assessed the feasibility of these or other options.
Assessing the feasibility of options, including a review of the data needed to show Border Patrol's reliance on non-owned roads such as public roads, could identify a solution for enhancing Border Patrol's operations and its community relationships. What GAO Recommends GAO recommends that CBP develop policy and guidance for documenting arrangements with landowners, as needed, and share the arrangements with its sectors; document and communicate the process and criteria for prioritizing funding of non-owned operational roads; and assess the feasibility of options, including data needs, for addressing the maintenance of non-federal public roads. DHS concurred with the recommendations.
Background Economic, Security, and Illicit Drug Trafficking Challenges in the Caribbean The countries of the Caribbean are diverse in size, culture, and level of development, and face various interrelated economic and security challenges. According to a recent International Monetary Fund report, Caribbean countries have recently fallen into a pattern of low growth and high debt, and those with tourism-intensive economies are characterized by high rates of unemployment. They have endured frequent natural disasters that reduced economic output and imposed reconstruction costs, as well as deep macroeconomic, financial, and structural challenges that have resulted in lower-than-anticipated rates of economic growth, according to the same report. Recent reports emphasize that crime and violence in the Caribbean have inflicted widespread costs, generating a climate of fear for citizens and diminishing economic growth. These reports note that Caribbean countries have some of the highest per-capita murder rates in the world, with assault rates that are significantly above the world average, and high crime rates have stretched the capacity of their criminal justice systems, which are small and largely characterized as weak and ineffective. Because of their location between drug production sources in South America and consumer markets in North America and Europe, Caribbean countries have become a major transit zone for illicit drugs, particularly drugs destined for the United States. With long coastlines that are difficult to comprehensively patrol, and limited air and sea capabilities to support interdictions, the Caribbean countries often struggle to control territorial waters and stem the flow of drugs northward. Establishment of CBSI Over the years, the United States has created several initiatives to engage with the countries of the Caribbean Basin region to address economic and political issues. In May 2010, the United States, Caribbean Community member states, and the Dominican Republic formally launched CBSI to strengthen regional cooperation on security. At its inception in 2010, CBSI’s aim was to increase citizen safety through provision of U.S. foreign assistance to CBSI partner countries to reduce illicit trafficking, improve public safety and security, and promote social justice; these three “pillars” remain the overall goals of CBSI. There are 13 CBSI partner countries—Antigua and Barbuda, Bahamas, Barbados, Dominica, the Dominican Republic, Grenada, Guyana, Jamaica, St. Kitts and Nevis, St. Lucia, St. Vincent and the Grenadines, Suriname, and Trinidad and Tobago (see fig. 1). U.S. Government Agencies Involved in Funding and Implementing CBSI Activities The U.S. agencies and offices currently funding CBSI activities are State’s Bureau of International Narcotics and Law Enforcement Affairs (INL), State’s Bureau of Political-Military Affairs (PM), and USAID (see fig. 2). State’s Bureau of Western Hemisphere Affairs (WHA) plays a coordinating role for CBSI. To implement CBSI activities, State and USAID partner with nongovernmental and multilateral organizations as well as other government agencies, such as DOD and the Departments of Homeland Security, Justice, and Treasury. U.S. Government Agencies Have Allocated More Than $560 Million in CBSI Funds from Fiscal Years 2010 through 2018 to Support Various Security Activities From fiscal years 2010 through 2018, U.S. agencies have allocated more than $560 million in funding for CBSI activities.
Since fiscal year 2012, annual allocations have remained relatively constant, ranging between $56.6 million and $63.5 million. Of the 13 CBSI partner countries, U.S. agencies have provided the most CBSI funding to the Dominican Republic, Jamaica, and the countries covered by the Eastern Caribbean embassy. State and USAID disbursed funds to support activities in partner countries that improve law enforcement and maritime interdiction capabilities, train and otherwise improve the capabilities of national security institutions, prevent crime and violence, and deter and detect border criminal activity. These activities are generally aligned with the three pillars of CBSI. State and USAID Allocated More Than $560 Million to CBSI from Various Foreign Assistance Accounts From fiscal years 2010 through 2018, State and USAID allocated more than $560 million in funding for CBSI activities. Of that amount, U.S. agencies have disbursed or committed approximately $361 million for CBSI activities in the 13 CBSI partner countries and for region-wide activities. Funding for CBSI activities comes from a combination of U.S. foreign assistance accounts—mostly through INCLE, ESF, and FMF, with a small amount of funding provided through NADR and DA (see textbox). U.S. Foreign Assistance Accounts That Have Been Used to Fund Caribbean Basin Security Initiative (CBSI) Activities International Narcotics Control and Law Enforcement (INCLE): The Department of State's (State) Bureau of International Narcotics and Law Enforcement Affairs (INL) manages the INCLE account, which provides assistance to foreign countries and international organizations to develop and implement policies and programs that maintain the rule of law and strengthen institutional law enforcement and judicial capabilities, including countering drug flows and combating transnational crime. Generally, INCLE funds are available for obligation for 2 fiscal years and must be disbursed within 5 years of the end of the period of availability for new obligations. Economic Support Fund (ESF): State and the U.S. Agency for International Development (USAID) share responsibility for managing the ESF account. For CBSI activities, it is primarily USAID that uses ESF funds to assist foreign countries in meeting their political, economic, and security needs. Generally, ESF funds are available for obligation for 2 fiscal years and must be disbursed within 5 years of the end of the period of availability for new obligations. Foreign Military Financing (FMF): State's Bureau of Political-Military Affairs manages the FMF account, which provides grants and loans to foreign governments and international organizations for the acquisition of U.S. defense equipment, services, and training. The Department of Defense is the main implementer of this funding. Previous acts appropriating funds for FMF have generally provided that such funds are available for obligation for 1 year and deem such funds to be obligated upon apportionment. Nonproliferation, Anti-terrorism, Demining, and Related Programs (NADR): State manages the NADR account, which funds contributions to organizations supporting nonproliferation and provides assistance to foreign countries for nonproliferation, antiterrorism, demining, export control assistance, and other related activities. Generally, NADR funds are available for obligation for 2 fiscal years and must be disbursed within 5 years of the end of the period of availability for new obligations.
Development Assistance (DA): USAID manages the DA account, which responds to long-term challenges to human and economic security by funding activities in areas such as economic growth and education. Generally, DA funds are available for obligation for 2 fiscal years and must be disbursed within 5 years of the end of the period of availability for new obligations. Since 2012, allocations have remained relatively constant each year, ranging between $56.6 million and $63.5 million. Table 1 summarizes the INCLE, ESF, NADR, and DA allocations and disbursements and the FMF allocations and commitments by year of appropriation. Appendix II includes a breakdown of allocated, obligated, and disbursed funds for INCLE, ESF, NADR, and DA accounts; appendix III includes a breakdown of FMF funding that State has allocated and committed to CBSI. Since fiscal year 2010, U.S. agencies have provided the most CBSI funding to the Dominican Republic, Jamaica, and the countries covered by the Eastern Caribbean embassy. These countries received approximately 66 percent of total CBSI allocations from fiscal years 2010 through 2018. Approximately 13 percent of total CBSI allocations went to the Bahamas, Guyana, Suriname, and Trinidad and Tobago, while 21 percent of total CBSI allocations went to regional activities. Table 2 provides a breakdown of allocated funds by country for CBSI activities. U.S. Government Agencies Support Various Security Activities throughout the Caribbean in Line with the Three CBSI Pillars State and USAID fund various security activities in partner countries. State uses INCLE and FMF funds to conduct activities in support of CBSI goals at all seven embassies, covering all 13 CBSI countries. State uses several different implementing mechanisms—including contracts, cooperative agreements, and interagency agreements. According to INL officials, INL typically has between 10 and 50 distinct ongoing activities within any individual country program at any given time, ranging from multiyear, multimillion-dollar embedded advisory programs to one-time procurements for equipment or individual trainings. USAID uses ESF funds for activities in three missions—the Dominican Republic, Eastern and Southern Caribbean, and Jamaica. In general, USAID uses similar implementing mechanisms but typically has fewer projects, which cover multiple years. State primarily focuses on funding CBSI activities that fall within the pillar of reducing illicit trafficking, and USAID concentrates on funding activities within the pillar of promoting social justice. Both agencies fund activities in the pillar of improving public safety and security. Reducing illicit trafficking. State uses INCLE and FMF funds, through interagency agreements with DOD and other implementing partners, to increase Caribbean countries' control over their territorial maritime domain and reduce illicit trafficking, such as in narcotics and firearms, as the examples below illustrate. Eastern Caribbean. INL and PM have provided training and equipment to the Regional Security System, a collective defense organization of Eastern Caribbean countries whose responsibilities include regional law enforcement training and narcotics interdiction. For example, U.S. assistance funded the refurbishment of aircraft operated by the Regional Security System to provide equipment for intelligence, surveillance, and reconnaissance related to drug interdiction. Jamaica.
INL and PM have provided boats to the government of Jamaica to increase the government's capacity to conduct counternarcotics operations (see fig. 3). Throughout the Caribbean. INL supports activities providing training, technical assistance, policy guidance, and basic equipment to enhance the capacity of CBSI countries to combat illicit small arms and ammunition trafficking through operational forensic ballistics. Throughout the Caribbean. State uses an interagency agreement to support the Technical Assistance Field Team (TAFT) program. This program, supported by both FMF and INCLE funds, aims to build the maritime capacity of partner countries throughout the Caribbean. The team is composed of 15 Coast Guard and DOD engineers, technicians, specialists, and logisticians, based at U.S. Southern Command, who assist Caribbean maritime security forces with maintenance and sustainment issues. The advisors have worked to implement inventory management systems within CBSI countries and conduct regular site visits to CBSI countries to assist in the maintenance and logistics of maritime assets. Promoting social justice. USAID and its implementing partners—multilateral and nongovernmental organizations, for the most part—use ESF funds in an effort to increase economic opportunities for at-risk youth and vulnerable populations, improve community and law enforcement cooperation, improve the juvenile justice sector, and reduce corruption in public and private sectors. Dominican Republic. USAID has provided assistance for community-based activities, such as the Community Justice Houses. These centers are designed to provide services related to the justice sector, such as public defense and mediation efforts for populations in areas of high violence that have limited access to traditional justice institutions. Dominican Republic and Barbados. USAID's implementing partners work with at-risk youth to provide skills training and education for those individuals in vulnerable populations. Jamaica. USAID's implementing partners work with youth in the juvenile justice system to provide marketable technical skills, life skills, and individualized psychosocial attention to assist in their reintegration into society. Eastern and Southern Caribbean. USAID partners use a community-based approach to crime prevention to identify the underlying causes of crime and violence by collecting standardized crime data across the region. Increasing public safety and security. State uses INCLE to fund activities to strengthen the rule of law and reduce transnational crime. USAID uses ESF to support public safety and security activities by funding training and support programs that aim to build institutional capacity for police and judicial systems. Jamaica. INL works to enhance the government of Jamaica's capacity to disrupt and deter money laundering operations and other financial crimes by providing technical assistance, equipment, and training for combating money laundering and financial crime, and for the seizure of criminally acquired assets. Eastern Caribbean. INL uses training, technical assistance, equipment purchases, and operational support to combat financial crimes and increase asset forfeiture efforts. Dominican Republic. INL has provided funding for the government's creation of a centralized emergency “911” response system to increase citizen safety and security. Dominican Republic.
Both INL and USAID provide assistance to the Dominican National Police, and USAID's implementing partners work with the judicial sector to improve the skills of prosecutors (see fig. 4). INL provides assistance to the Dominican National Police by funding training to increase police professionalization and supports training on enforcing legislation for prosecutors and judges. USAID funding supports the reform and modernization of the Dominican National Police by strengthening the management capacity and accountability of the organization. USAID implementing partners also work with prosecutors to strengthen the criminal justice system in the Dominican Republic. State and USAID Undertake Some Planning and Reporting of CBSI Activities but the U.S. Government Cannot Assess Initiative-wide Progress The United States and Caribbean countries meet periodically to set strategic goals and to designate high-level priorities for the subsequent year, and U.S. agencies individually plan and report on CBSI activities on a country-specific basis through a variety of reporting mechanisms (see fig. 5). However, State has not created an initiative-wide mechanism for planning and reporting on CBSI activities, and the U.S. government cannot assess initiative-wide progress. State and USAID Establish Strategic Goals and Priorities for CBSI with Partner Countries At the strategic and political level, U.S. government agencies and CBSI partner countries engage on a periodic basis to set strategic goals and to designate high-level priorities for the subsequent year. The process involves various technical working groups meeting throughout the year, culminating in the Caribbean-United States Security Cooperation Dialogue meeting, attended by the Caribbean Community, the Dominican Republic, the United States, and other interested Caribbean states and international partners. At the 2017 meeting, participants set strategic goals by reaffirming the initiative's three pillars of substantially reducing illicit trafficking, advancing public safety and security, and promoting social justice. Participants also produced a high-level plan of action that aimed to strengthen commitment and accountability of the countries involved and to ensure political support for implementation. Within each goal, the plan identified high-level priorities such as counternarcotics, anti-money laundering, border security, justice reform, and anti-corruption. State and USAID Generally Plan CBSI Activities on a Country-Specific Basis At the implementation level, State and USAID separately plan and report their CBSI activities, generally on a country-specific basis. Within State, INL develops multiyear country plans that are the basis for making decisions on CBSI activities for each country, according to INL officials. The plans describe objectives within a country for program areas such as law enforcement professionalization, rule of law, and counternarcotics, and include performance indicators related to those program areas. INL developed a country plan for each of the seven embassies in CBSI for fiscal years 2017 through 2021. In addition, a portion of INL's CBSI funding is devoted to regional activities (i.e., activities that are implemented in more than one CBSI country), and INL developed the CBSI Regional Implementation Plan to describe the objectives and performance indicators for regional activities. The CBSI activities that are funded through FMF are planned and implemented by DOD in coordination with PM.
USAID uses its Country Development Cooperation Strategies (CDCS) as the basis for planning CBSI activities in each country, according to USAID officials. USAID developed a CDCS for each of the three missions that have a USAID presence among the CBSI countries—Eastern and Southern Caribbean, the Dominican Republic, and Jamaica. The strategies outline priorities for each mission and typically cover 5 years. In each CDCS, USAID outlines three development objectives, including one that is CBSI-related—on crime prevention and reduction—and two that are not CBSI-related—on climate change and health care. According to INL and USAID officials, coordination of CBSI activities between the two agencies primarily occurs at the embassy level through routine meetings. Officials at embassies in the CBSI countries also compile bimonthly reporting cables that contain information on selected CBSI activities. State's WHA, which plays a coordinating role for CBSI, holds monthly coordination meetings for INL, PM, and USAID officials in Washington, D.C., to discuss high-level issues and upcoming events relevant to the initiative, as well as to prepare for meetings with Caribbean partners, according to officials. The U.S. Government Cannot Assess CBSI Initiative-wide Progress Because It Does Not Have an Initiative-wide Planning and Reporting Mechanism While State and USAID set strategic goals and priorities with CBSI partner countries and plan for and report on CBSI activities within each agency or bureau, State has not established a CBSI-wide planning and reporting mechanism that links agencies' activities to the three overall CBSI goals. State and USAID typically use Integrated Country Strategies (ICS) to strategically plan in a given country, and Performance Plans and Reports (PPR) to assess progress made relative to the foreign assistance priorities in a given country. Each of the U.S. embassies that cover the CBSI countries has both an ICS and PPR. However, the PPRs for the individual CBSI countries are for bilateral funds, and the ICSs serve as a whole-of-U.S.-government strategy in a country. According to State officials, since CBSI is a regional initiative, CBSI activities are included in the scope of the relevant regional planning and reporting documents. These regional documents include the WHA Joint Regional Strategy and the WHA PPR. However, these documents represent the entire Western Hemisphere and are not specific to CBSI activities. The Joint Regional Strategy does not serve as a planning mechanism for CBSI-wide activities and does not establish CBSI-specific targets or performance indicators. Moreover, while the PPR reports outputs and outcomes, CBSI results are aggregated with other regionally funded activities in the Western Hemisphere, such as the Central America Regional Security Initiative. For example, while the PPR may report the number of judicial personnel trained with U.S. government assistance, that number may include officials in the Dominican Republic, Jamaica, Honduras, or any number of other countries within the Western Hemisphere. Therefore, the most recent WHA PPR does not serve as a CBSI reporting mechanism, as it is not always possible to know which results are related to CBSI activities, and CBSI-wide outputs and outcomes can be indiscernible from other regional efforts. In 2012, State created the CBSI Results Framework, recognizing the importance of tracking initiative-wide results.
The Framework included the three CBSI pillars—reducing illicit trafficking, improving public safety and security, and promoting social justice—and specified intermediate results, such as reducing drug demand in target areas, improving security at ports of entry, and improving community and law enforcement cooperation. Each of the intermediate results included performance indicators for measuring and reporting CBSI results. According to WHA officials, WHA had envisioned establishing baseline data, obtaining annual reporting from each embassy, and reporting on a subset of the indicators. However, neither State nor USAID currently uses the framework to gauge overall progress of CBSI. State officials we interviewed were not aware of the reason for discontinuing use of the framework and stated that the decision to discontinue using it predated their tenure. According to State officials and our assessment of program documentation, State does not currently use the framework in any official capacity. While USAID officials stated that they continue to use the framework as internal guidance on CBSI's direction, they stated that they do not use it to track progress. The delivery of U.S. foreign assistance often involves multiple agencies or a whole-of-government approach. We have previously identified key elements for effectively aligning foreign assistance strategies in situations where multiple agencies are working together to deliver foreign assistance, such as CBSI. These elements include, among others, the establishment of interagency coordination mechanisms, integration of strategies with relevant higher- or lower-level strategies, and assessment of progress toward strategic goals through the articulation of desired results, activities to achieve the results, performance indicators, and monitoring and evaluation plans and reports. We found that agencies that establish strategies aligned with partner agencies' activities, processes, and resources are better positioned to accomplish common goals, objectives, and outcomes. For foreign assistance that involves multiple agencies, strategies that consistently address agencies' roles and responsibilities and interagency coordination mechanisms can help guide implementation and reduce potential program fragmentation. Absent a functioning CBSI-wide planning and reporting mechanism, State's and USAID's existing planning efforts may not ensure that activities are effectively coordinated to reduce fragmentation or overlap. In 2016, USAID contracted for an independent assessment of all of its CBSI activities. Since USAID implements CBSI in conjunction with other U.S. agencies, such as State, one of the objectives within the assessment was to determine the degree to which USAID's activities were complementary with those of other U.S. agencies and whether there were instances of overlap. The assessment noted that coordination and information exchange between the agencies about individual CBSI activities and their components appeared to be relatively ad hoc and was primarily seen as the mandate of staff in the field, though at that level, information sharing and coordination had been widely variable. It noted that in general, the level and type of communication between USAID and INL tended to be influenced by personalities, and information was not shared systematically.
The assessment concluded that there was a potential for overlap between USAID and INL and recommended that USAID and INL take several actions to strengthen information sharing and coordinate and align activities. This coordination is important since overlap or unintended competition between INL's and USAID's CBSI activities has been documented on at least one occasion. According to the fiscal year 2017 annual report submitted by an implementing partner for one of USAID's activities in the Dominican Republic, the partner was directed to suspend several of its activities (training to strengthen standards for criminal case preparation and training for police and prosecutors), reportedly because State realigned the task to INL. The report cited poor delineation of roles and relationships as an underlying cause. While State and USAID set strategic goals and plan and report on CBSI activities in individual countries, the U.S. government does not have a functioning initiative-wide planning and reporting mechanism that links CBSI activities to overall goals or specifies a means for assessing initiative-wide progress through articulation of desired results, performance indicators, and a monitoring and evaluation plan. Without such a mechanism that establishes consistent performance indicators across agencies, countries, and activities and determines baselines and targets, it is difficult to measure CBSI's activities across the initiative, making it difficult to assess any progress made toward achieving CBSI's overall goals. Consequently, the U.S. government has limited ability to evaluate CBSI's successes and limitations and use such information to better guide future decision-making. State and USAID Established Objectives and Performance Indicators and INL Is Taking Steps to Address Weaknesses in Program Monitoring USAID and implementing partners have established objectives and performance indicators for selected CBSI activities that we reviewed and have been measuring and reporting on progress for those activities. Within State, INL and implementing partners have established objectives and performance indicators for all of the activities that we reviewed, and INL and PM receive quarterly monitoring reports containing performance information on the TAFT program. In response to identified weaknesses, INL is taking steps to improve program monitoring for its Western Hemisphere programs, which include CBSI activities. State and USAID Established Objectives and Performance Indicators for Selected CBSI Activities State and USAID policies related to program management—found in the Foreign Affairs Manual (FAM) and the Automated Directives System (ADS), respectively—require the establishment of objectives and performance indicators for program monitoring. We found that USAID and its implementing partners established objectives and performance indicators for all 10 of the CBSI activities in our sample and use these indicators to measure activity progress. Table 3 includes examples of the types of indicators established for USAID activities in our sample. In addition to establishing performance indicators, USAID and its implementing partners are using these indicators to measure the progress of CBSI activities. We found that implementing partners for nearly all of the activities in our sample had submitted progress reports to USAID that used the performance indicators to measure progress and identify challenges in achieving the activities' goals.
State and its implementing partners also established objectives and performance indicators for all 15 of the CBSI activities that we reviewed. See table 4 for examples of the types of indicators established for INL activities in our sample. INL Cannot Ensure the Reliability of Its Program Monitoring Data but Is Taking Steps to Address Weaknesses in Western Hemisphere Program Monitoring INL cannot ensure the reliability of its CBSI program monitoring data but is taking steps to improve its ability to consistently collect and store such data for its activities in the Western Hemisphere, including CBSI activities. We have previously reported that effective program monitoring of foreign assistance requires quality data for performance reporting. Specifically, leading practices for monitoring of foreign assistance activities include development of objectives and relevant performance indicators, procedures for assuring quality of data on performance indicators, and submission of performance reports. According to INL officials, in the absence of a centrally available data management system, program monitoring data are collected and maintained at each embassy. As a result, compiling and providing program monitoring data is time-consuming. For example, when we requested a list of all completed and ongoing INL-funded CBSI activities from fiscal years 2012 through 2017, INL responded that it would take several months to compile that information. Further, INL officials told us that they cannot ensure the reliability of their program monitoring data because of questions about the comparability of data collected across embassies. During the course of our work, INL officials at headquarters and overseas told us that program monitoring is conducted differently in every embassy, and program monitoring data are not defined or recorded in a standardized manner. These variations can result in discrepancies in how program performance data are defined and collected. For example, INL officials explained that in order to collect drug seizure information that can be analyzed across countries, the data need to be collected in the same units of measurement and over the same time period in each country, but currently, they are not. According to INL, absent a standardized data collection process, it is difficult to track data trends across programs. INL has expressed concerns about its program monitoring and an inability to centrally collect reliable program monitoring data. In 2015, an independent evaluation of INL's CBSI activities noted that a lack of monitoring information hinders INL's efforts to link assistance directly to goals, objectives, and results laid out in the CBSI Results Framework. It recommended that INL prioritize improving internal program monitoring capacity. INL's Functional Bureau Strategy, released in 2018, states that INL's program monitoring efforts are often constrained by the availability of reliable data. In response to these concerns about program monitoring, the INL office for Western Hemisphere Programs contracted with a private firm in 2017 to improve its program monitoring capabilities by creating new performance indicators meant to standardize data collection across INL's programs in the Western Hemisphere and better capture the impact of INL's assistance. The contract also included the creation of a centralized data management system for collecting and storing the program monitoring data associated with the performance indicators.
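To make the comparability problem concrete, the sketch below shows the kind of normalization a standardized collection process would impose before seizure figures reported by different embassies could be analyzed together. The country names, units, reporting periods, and quantities are hypothetical, and the record layout is an illustration rather than a description of any INL system.

```python
# Illustrative sketch only: the units, periods, and figures below are
# hypothetical, not INL data. It shows why seizure reports recorded in
# different units and reporting periods cannot be compared until they
# are converted to a common basis.

# Conversion factors to a common unit (kilograms).
TO_KG = {"kg": 1.0, "lb": 0.453592, "metric_ton": 1000.0}

# Raw reports as each embassy might record them today: different units
# and different reporting periods.
raw_reports = [
    {"country": "Country A", "period": "CY2017", "quantity": 2.0, "unit": "metric_ton"},
    {"country": "Country B", "period": "FY2017", "quantity": 4500.0, "unit": "lb"},
    {"country": "Country C", "period": "FY2017", "quantity": 1800.0, "unit": "kg"},
]

def normalize(report):
    """Convert a seizure report to kilograms and flag periods that do not
    follow a common fiscal-year convention, so they can be reconciled."""
    kilograms = report["quantity"] * TO_KG[report["unit"]]
    comparable = report["period"].startswith("FY")
    return {"country": report["country"], "period": report["period"],
            "kilograms": kilograms, "comparable_period": comparable}

for row in map(normalize, raw_reports):
    print(row)
```

The point of a standardized collection process is to make this after-the-fact conversion unnecessary by recording the data in common units and periods in the first place.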
According to INL officials and progress reports submitted by the contractor, some progress has been made. To date, the contractors have been studying the availability of data, reviewing existing performance indicators, and proposing new indicators. The contractors have been considering options for designing and building the centralized data management system. However, INL officials acknowledge that data challenges remain, such as the issue of how to collect standardized data from each of the embassies and how to build a functioning data management system that is compatible with State requirements. As of October 2018, according to INL officials, the characteristics of the centralized data management system had not yet been determined, and consequently, they are uncertain what capabilities the final data management system will have. Therefore, it is unclear whether the system will allow for the consistent collection and storage of reliable program monitoring data for all CBSI activities and the ability to distinguish these data from those of other Western Hemisphere activities. In the absence of centrally available, reliable data for CBSI activities, INL may continue to struggle with effective program monitoring for these activities. Conclusions The Caribbean region faces a variety of economic and security challenges that jeopardize the region's economic growth and development. Because of close societal ties and geographic proximity, these challenges also threaten U.S. security. CBSI was created to respond to these threats—to provide mutually beneficial assistance that would increase citizen security for residents of the Caribbean region and bolster economic opportunities. However, while U.S. agencies have allocated more than $560 million to CBSI since 2010, they cannot attest to the initiative's success or failure. State's WHA, which plays the coordinating role for CBSI, has not established an initiative-wide planning and reporting mechanism that ensures CBSI activities are being coordinated to maximize the impact of assistance and prevent overlap, and that provides a means for assessing overall progress. Without such a mechanism, the ability to demonstrate the efficacy of the initiative, and to emphasize positive results that have been achieved, is limited. Although USAID and State have established objectives and performance indicators for the CBSI activities we reviewed, State does not have a process for centrally collecting and storing reliable program monitoring data for the activities it funds through CBSI, particularly those managed by INL. While INL is taking steps to address these challenges by improving program monitoring across its activities in the Western Hemisphere, without reliable performance data specific to CBSI, State cannot report comprehensively or accurately on its CBSI activities or track data trends across countries. Recommendations for Executive Action We are making the following two recommendations to State: The Secretary of State should, in consultation with USAID and other stakeholders as appropriate, create an initiative-wide planning and reporting mechanism for CBSI that includes the ability to monitor, evaluate, and report the results of their collaborative efforts (Recommendation 1).
The Secretary of State should ensure that INL's Office of Western Hemisphere Programs develops and implements a data management system for centrally collecting reliable program monitoring data for all INL-funded CBSI activities through its current program monitoring contract or by some other means (Recommendation 2). Agency Comments and Our Evaluation We provided a draft of this report to State, USAID, DOD, the Department of Justice, and the Department of Homeland Security for review and comment. In its comments, reproduced in appendix IV, State concurred with our two recommendations. State noted that it plans to develop an updated Results Framework for CBSI that will provide the basis for initiative-wide planning and reporting. State also noted that INL's Office of Western Hemisphere Programs is working through its existing monitoring and evaluation contract to improve centralized data collection and is developing plans for an enhanced data management system that will facilitate the collection and management of both strategic and implementer-reported data. In addition, State reported that INL is developing complementary bureau-wide monitoring and evaluation guidance and procedures to ensure the consistency and reliability of collected data across INL programs, which include CBSI activities. USAID also provided written comments, which we have reproduced in appendix V. State, USAID, DOD, and the Department of Homeland Security provided technical comments, which we have incorporated as appropriate. The Department of Justice reviewed the report but did not provide comments. We are sending copies of this report to the appropriate congressional committees, the Secretary of State, the Administrator of USAID, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7141 or groverj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI. Appendix I: Objectives, Scope, and Methodology We were asked to review security assistance to the Caribbean region provided through the Caribbean Basin Security Initiative (CBSI). In this report, we (1) provide information on U.S. funding for CBSI activities, (2) examine the extent to which the U.S. Department of State (State) and U.S. Agency for International Development (USAID), in conjunction with other agencies, have implemented a planning and reporting process for CBSI, and (3) examine the extent to which State and USAID have established objectives and performance indicators to measure the progress of their CBSI activities. To provide information on U.S. funding for CBSI, we obtained State and USAID data for fiscal years 2010 through 2018. We analyzed these data to determine allocations, unobligated balances, unliquidated obligations, and disbursements by fiscal year, funding account, and country. We compared the data to those previously reported to identify inconsistencies, and interviewed State and USAID officials. We determined these data were sufficiently reliable for reporting allocations, unobligated balances, unliquidated obligations, and disbursements by fiscal year, funding account, and country.
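The status categories named above are related by simple arithmetic: an appropriation's unobligated balance is its allocation less its obligations, and its unliquidated obligations are its obligations less its disbursements (appendix II defines these categories). The following is a minimal sketch of that arithmetic, using hypothetical dollar figures rather than actual CBSI data.

```python
# Minimal sketch of the arithmetic behind the funding status categories;
# all dollar figures here are hypothetical, not CBSI data.
# For a given appropriation:
#   unobligated balance     = allocated - obligated
#   unliquidated obligations = obligated - disbursed

def funding_status(allocated, obligated, disbursed):
    """Derive the status of an appropriation from three reported totals."""
    assert allocated >= obligated >= disbursed >= 0
    return {
        "allocated": allocated,
        "unobligated_balance": allocated - obligated,
        "unliquidated_obligations": obligated - disbursed,
        "disbursed": disbursed,
    }

# Example: a hypothetical $10 million allocation, of which $8 million has
# been obligated and $5 million has actually been paid out.
print(funding_status(10_000_000, 8_000_000, 5_000_000))
# {'allocated': 10000000, 'unobligated_balance': 2000000,
#  'unliquidated_obligations': 3000000, 'disbursed': 5000000}
```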
To obtain additional detail on the types of assistance provided by the United States, we reviewed activity documentation; interviewed State and USAID officials in Washington, D.C.; and traveled to Barbados, the Dominican Republic, and Jamaica to meet with State, USAID, and implementing partner officials. We also observed CBSI activities in these countries. We selected these countries for fieldwork because they were among the countries receiving the largest amount of CBSI funding and the embassies there included CBSI program officials from State and USAID. The findings from these countries are not generalizable to all CBSI countries. To determine the extent to which State and USAID have implemented a planning and reporting mechanism for CBSI, we obtained relevant CBSI planning and reporting documents, including State Bureau of International Narcotics and Law Enforcement Affairs (INL) country and regional implementation plans and documents related to the Caribbean-U.S. Security Cooperation Dialogue; and strategy documents such as Integrated Country Strategies, Country Development Cooperation Strategies, and Functional Bureau Strategies. We also assessed relevant Performance Plans and Reports for Caribbean countries and the Western Hemisphere and interviewed State officials to determine how information on CBSI activities is aggregated and reported on a country-level and initiative-wide basis. In addition, we interviewed relevant State and USAID officials in Washington, D.C., and in Barbados, the Dominican Republic, and Jamaica about their planning processes for CBSI activities. We compared the planning and reporting procedures in place to the key elements for effectively aligning foreign assistance strategies in situations where multiple agencies work together to deliver foreign assistance. To determine the extent to which State and USAID have established objectives and performance indicators to measure the progress of CBSI activities, we selected three case study countries—Barbados, the Dominican Republic, and Jamaica. We selected these three countries because they receive the greatest amount of CBSI funding and because they have program officials from State and USAID in their embassies. We requested lists of all ongoing and completed CBSI activities from State and USAID for fiscal years 2012 through 2017 and used the lists to select a non-generalizable sample of activities, 15 implemented by State and 10 by USAID. The activities were chosen to provide a range of implementing partners, types of activities, and locations. We requested State and USAID documentation related to the activities in our sample, including applications for funding, contracts, agreements, program monitoring and progress reports, financial reports, and evaluations. We reviewed the documentation to assess the performance management practices in place for these activities, as well as country-level and regional-level reporting related to the activities—specifically focusing on the use of program objectives and performance indicators, which are used to set and measure progress toward program goals. The objectives and performance indicators in place for these activities do not represent those in place for all CBSI activities, but offer illustrative examples. We compared the performance management practices in place for the sample activities to State and USAID policies. For the Technical Assistance Field Team (TAFT) program implemented by the Department of Defense (DOD) and the U.S.
Coast Guard on behalf of State's Bureau of Political-Military Affairs, we reviewed quarterly reports between fiscal years 2014 and 2018 for performance management information. The TAFT program is designed to provide technical assistance to enhance operational readiness and maintenance of equipment used by CBSI countries. The quarterly reports include articulation of objectives, descriptive information on the support TAFT members provided during each visit, assessments of host country capabilities, and details on where, when, and how funds were expended. While this information is not reported in the same manner as State's and USAID's performance data, we determined it appropriate to treat the information provided in the TAFT quarterly reports as comparable to the setting of objectives and performance indicators as generally carried out by State and USAID. We also interviewed State, USAID, DOD, the Department of Justice, the Department of Homeland Security, and other implementing partner officials in Washington, D.C.; Barbados; the Dominican Republic; and Jamaica; and conducted site visits in these countries to determine the types of performance indicators tracked and reported on for each activity. We conducted this performance audit from November 2017 to February 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Funding Data Tables To demonstrate how funding for Caribbean Basin Security Initiative (CBSI) activities has been allocated, obligated, and disbursed, we are providing a status of CBSI funds as of November 2018. Tables 5-9 below show CBSI funding from the International Narcotics Control and Law Enforcement (INCLE); Economic Support Fund (ESF); Nonproliferation, Anti-terrorism, Demining, and Related Programs (NADR); and Development Assistance (DA) accounts. These tables illustrate, by year of appropriation, how U.S. agencies have allocated, obligated, and disbursed funds for activities in CBSI partner countries. Specifically, the tables include unobligated balances—that is, portions of allocated funds that have not yet been obligated—and unliquidated obligations (i.e., obligated balances)—that is, amounts already incurred for which payment has not yet been made. Appendix III: Status of Caribbean Basin Security Initiative Foreign Military Financing Account Funds Table 10 below provides the status of Caribbean Basin Security Initiative (CBSI) Foreign Military Financing (FMF) funds as of November 2018. The presentation of FMF allocations and commitments in this table is different from presentations on allocations, obligations, and disbursements of the other CBSI accounts in tables 5-9 in appendix II because FMF funds are budgeted and tracked in a different way. The Defense Security Cooperation Agency (DSCA) and the Defense Finance and Accounting Service (DFAS) are responsible for the financial systems that account for FMF funds as well as for tracking the implementation and expenditure of those funds. According to DSCA officials, FMF funds are obligated on apportionment. Further, DSCA's system can track only uncommitted and committed amounts, not unliquidated obligations or disbursements, of FMF funds.
DFAS tracks obligations and disbursements using the Defense Integrated Finance System; however, there is no direct link between the DSCA and DFAS systems, and the DFAS system does not track funding for specific initiatives, such as CBSI. Appendix IV: Comments from the Department of State Appendix V: Comments from the U.S. Agency for International Development Appendix VI: GAO Contact and Staff Acknowledgments GAO Contact: Staff Acknowledgments: In addition to the contact named above, Thomas Costa (Assistant Director), Jennifer Young, Martin Wilson, Peter Choi, Debbie Chung, Benjamin Licht, Martin de Alteriis, Neil Doherty, and Mark Dowling made key contributions to this report.
Why GAO Did This Study The Caribbean region, which shares geographic proximity and common interests with the United States, faces high rates of crime and violence. In 2010, the United States and Caribbean countries formally launched CBSI, which aims to increase citizen safety. GAO was asked to examine U.S. assistance through CBSI. This report (1) discusses U.S. funding for CBSI activities, (2) examines the extent to which there is a planning and reporting process for CBSI, and (3) examines the extent to which State and USAID have established objectives and performance indicators to measure progress of their CBSI activities. GAO analyzed State and USAID data; assessed government strategies and performance reports; selected a non-generalizable sample of 25 CBSI activities and analyzed State and USAID documentation related to those activities; interviewed relevant officials; and conducted fieldwork in Barbados, the Dominican Republic, and Jamaica, which are the countries generally receiving the most CBSI funding. What GAO Found U.S. agencies have allocated more than $560 million for the Caribbean Basin Security Initiative (CBSI) from fiscal years 2010 through 2018 for activities related to the three pillars of CBSI—reduce illicit trafficking (such as in narcotics and firearms), improve public safety and security, and promote social justice. For example, the Department of State's (State) Bureau of International Narcotics and Law Enforcement Affairs (INL) has ongoing activities such as advisory programs and equipment procurements, while the U.S. Agency for International Development (USAID) has activities aimed at increasing economic opportunities for at-risk youth and improving the skills of prosecutors. The U.S. government has undertaken some planning and reporting of CBSI activities, but State has not created an initiative-wide planning and reporting mechanism. Agencies individually set strategic goals and priorities with CBSI countries and plan and report on their CBSI activities on a country-specific basis. However, no mechanism exists that facilitates interagency coordination or establishes consistent performance indicators across agencies, countries, and activities—key elements for effectively aligning foreign assistance strategies. Without such a planning and reporting mechanism, overall progress of the initiative cannot be assessed. State and USAID have established objectives and performance indicators for selected CBSI activities, and INL is taking steps to address identified weaknesses in its program monitoring. State and USAID established objectives and performance indicators for the 25 activities in GAO's sample. However, INL cannot ensure the reliability of its program monitoring data because these data are collected and maintained differently in each country and there is no centralized data storage system. INL recently contracted to improve and standardize its program monitoring data for Western Hemisphere activities, but according to INL officials, data challenges remain—in particular, how to collect standardized data from each of the embassies and how to build a data management system that is compatible with State requirements. Without reliable data, INL may continue to struggle with program monitoring of CBSI activities.
What GAO Recommends GAO recommends that State (1) create an initiative-wide planning and reporting mechanism for CBSI that includes the ability to monitor, evaluate, and report the results of collaborative efforts, and (2) ensure that INL develops and implements a data management system for centrally collecting reliable CBSI data. State agreed with the recommendations, noting that it plans to develop an updated Results Framework for initiative-wide planning and reporting and to improve centralized data collection through an enhanced data management system.
Background EEOC Data EEOC data we obtained and analyzed showed that financial services firms employed more than 3.2 million people in 2015. EEOC requires employers to use the North American Industry Classification System (NAICS) to classify their industry. Under this system, the financial services industry includes the following five sectors: credit intermediation and related activities (banks and other credit institutions), which include depository credit institutions—commercial banks, thrifts (savings and loan associations and savings banks), and credit unions; and nondepository credit institutions, which extend credit in the form of loans and include federally sponsored credit agencies, personal credit institutions, and mortgage bankers and brokers; funds, trusts, and other financial vehicles (funds and trusts), which include investment trusts, investment companies, and holding companies; securities, commodity contracts, and other financial investments and related activities (securities and other activities), which is composed of a variety of firms and organizations that bring together buyers and sellers of securities and commodities, manage investments, and offer financial advice; insurance carriers and related activities (insurance), which include carriers and insurance agents that provide protection against financial risks to policyholders in exchange for the payment of premiums; and monetary authorities, which include central banks. EEOC requires private employers subject to Title VII of the Civil Rights Act of 1964 with 100 or more employees and all federal contractors who have 50 or more employees and meet certain other requirements to annually submit data on the racial/ethnic and gender characteristics of employees by various occupations for a broad range of industries, including financial services. Employers are required to submit these data to EEOC in an EEO-1 report. In addition to providing EEOC with data on the demographic characteristics of employees as of a specific point in time, employers must also report the number of employees working at headquarters and any additional offices, the primary industry type of the organization, and the numbers of employees in two different categories of management positions. Beginning in 2007, EEOC changed its requirements on the reporting of data on managers. More specifically, employers were required to report data for senior-level management positions rather than combining data on senior-level managers with data for first- and mid-level management positions as had been the practice prior to 2007. Since 2007, employers are to review EEOC guidance describing the two management positions and determine how their firm's job positions fit into these classifications. Senior-level managers include, for example, chief executive officers, chief financial officers, and managing partners. The first- and mid-level management category includes (1) middle managers who report to senior managers and typically lead major business units and (2) managers who report to middle managers and oversee day-to-day operations, such as team or branch managers. Additionally, EEOC changed its practices for collecting certain racial/ethnicity information. The EEO-1 form was changed in 2007 to divide “Asian or Pacific Islander” into two separate categories, “Asian” and “Native Hawaiian or other Pacific Islander.” Also, EEOC adopted a two-question format for collecting ethnicity data.
Under this format, employers should first ask employees whether they are Hispanic or Latino, and then ask those employees who do not identify as Hispanic or Latino for their race; a brief illustrative sketch of this two-step classification appears below. EEOC proposed revisions to the EEO-1 form in 2016, which would have required employers with 100 or more employees to report summary pay data in their EEO-1 report. The Office of Management and Budget (OMB) approved the revision in September 2016. In August 2017, OMB issued a memorandum suspending the pay-related data collection aspects of the EEO-1 form. According to the memorandum, since approving the revised EEO-1 form, the relevant circumstances related to the data collection had changed, and the burden estimates provided by EEOC in the original filing were materially in error. As a result, the previously approved EEO-1 form without the pay-related data requirements will remain in effect. Financial Services Industry and Diversity Practices We previously reported on the challenges faced by the financial services industry in promoting and retaining a diverse workforce. In 2010, we reported that diversity in management in the financial services industry did not change substantially from 1993 through 2008 and that diversity in senior positions was limited. We also found that without a sustained commitment among financial services firms to overcoming challenges to recruiting and retaining minority candidates, limited progress would be possible in fostering a more diverse workplace. Subsequently, in 2013, we found that following the 2007–2009 financial crisis, diversity in management in the financial services industry did not change substantially from 2007 through 2011 and that diversity in senior positions remained limited. We also found that women generally represented 45 percent of management-level positions each year from 2007 through 2011. Additionally, our 2013 report noted that practices that can support diversity include sponsorships (where an executive acts as a guide to help an employee navigate the organization) and efforts to address unconscious bias in promotions. In a January 2005 report, we defined diversity management as a process intended to create and maintain a positive work environment that values individuals' similarities and differences, so that all can reach their potential and maximize their contributions to an organization's strategic goals and objectives. We also identified a set of nine leading diversity management practices that should be considered when an organization is developing and implementing diversity management. They are (1) commitment to diversity as demonstrated and communicated by an organization's top leadership; (2) the inclusion of diversity management in an organization's strategic plan; (3) diversity linked to performance, making the case that a more diverse and inclusive work environment could help improve productivity and individual and organizational performance; (4) measurement of the impact of various aspects of a diversity program; (5) management accountability for the progress of diversity initiatives; (6) succession planning; (7) recruitment; (8) employee involvement in an organization's diversity management; and (9) training for management and staff about diversity management. In 2013, we reported that industry representatives confirmed that these nine practices are still relevant.
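As referenced earlier in this background discussion, the following sketch illustrates the EEO-1 two-question format for one employee record. The function name and record layout are hypothetical; the category names follow the post-2007 EEO-1 form.

```python
# Illustrative sketch of the EEO-1 two-question format: employees are first
# asked whether they are Hispanic or Latino, and only those who answer no
# are then asked their race. The function and record layout are hypothetical.

RACE_CATEGORIES = [
    "White",
    "Black or African American",
    "Asian",
    "Native Hawaiian or Other Pacific Islander",
    "American Indian or Alaska Native",
    "Two or More Races",
]

def classify_ethnicity(is_hispanic_or_latino, race=None):
    """Return the EEO-1 race/ethnicity category for one employee."""
    if is_hispanic_or_latino:
        # Employees who identify as Hispanic or Latino are not asked race.
        return "Hispanic or Latino"
    if race not in RACE_CATEGORIES:
        raise ValueError(f"unrecognized race category: {race!r}")
    return race

print(classify_ethnicity(True))             # Hispanic or Latino
print(classify_ethnicity(False, "Asian"))   # Asian
```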
Since our 2005 report, researchers and the federal government have recognized that a focus on inclusion in the workplace is an important component of creating and sustaining a diverse workforce. For example, the Office of Personnel Management notes that optimal performance is based on both diversity and inclusion, which it defines as a set of behaviors (culture) that encourages employees to feel valued for their unique qualities and experience a sense of belonging.

Research on Potential Benefits of Workforce Diversity

Research discusses a number of reasons why workforce diversity may be beneficial to businesses. For example, two studies summarized other research that found that diversity can bring new voices and perspectives into conversations about business strategies, such as developing opportunities in unserved markets. Also, a diverse workforce can help managers understand and address the needs of a demographically diverse customer base. That is, employees who are demographically similar to customers are likely to have an easier time understanding customer preferences and how they change over time. Additionally, a diverse workforce can stimulate a wider range of creative decisions. Researchers have noted that minority opinions stimulate creativity and divergent thought, and that creativity and innovation are enhanced when a diverse workforce is employed. Research on the effects of workforce diversity on financial performance has been mixed. For example, a 2003 report summarized the results and conclusions reached in four separate studies of the relationships between race and gender diversity and financial performance. The report concluded that race and gender diversity had no direct positive or negative influence on financial performance. A 2011 report that summarized this and other research found that researchers continue to put forth conflicting results regarding the business benefits of workforce diversity. In the authors' opinion, the goals of workforce diversity programs should be broad, and not just focused on the organization's financial performance.

Management-Level Diversity Trends Showed Marginal or No Increase from 2007 through 2015

Representation of minorities at the overall management level increased by 3.7 percentage points from 2007 through 2015 and their representation among senior-level managers increased by 1.7 percentage points during this time. Women's representation at the overall management level remained at about 45 percent from 2007 through 2015. Among the various sectors of the financial services industry, the insurance sector has consistently had the highest proportion of women in management positions while the banks and other credit institutions sector has consistently had the highest proportion of racial/ethnic minorities in management. As the size of financial services firms increased (by number of employees), the representation of minorities in overall management increased and the representation of women in overall management was generally the same. Additionally, management-level diversity in the financial services sector has similarities and differences compared to other sectors.

Management-Level Representation of Minorities Increased Marginally since 2007, but Representation Varied by Minority Group

Trends in Overall Management

At the overall management level, minority representation increased in the financial services sector, though representation varied by individual minority groups.
More specifically, the representation of minorities in management increased by 3.7 percentage points from 2007 through 2015, according to EEOC data (see fig. 1). This increase shows a continued upward trend from our 2006 report—the first of a series of reports we have issued on trends in the financial services industry—in which data showed that management-level representation by minorities increased from 11.1 percent to 15.5 percent from 1993 through 2004. Since 2007, Asians had the largest gains, increasing their representation among managers from 5.4 percent to 7.7 percent. Hispanics made smaller gains. In contrast, the proportion of African-Americans in management positions decreased from 6.5 percent to 6.3 percent. From 2007 through 2015, minorities' representation among first- and mid-level managers increased by 3.7 percentage points (see fig. 2). Minorities' representation among senior-level managers increased by 1.7 percentage points during this time. As previously noted, EEOC splits management into two categories: (1) first- and mid-level officials and managers and (2) executive and senior-level officials and managers. First- and mid-level management positions may serve as an internal pipeline in an organization through which minority candidates could move into senior-level management positions. In 2015, representation of minorities in first- and mid-level management positions was 22.4 percent compared to 12.3 percent of minorities in senior-level management positions. Among first- and mid-level managers, the representation of Asians increased by 2.6 percentage points from 2007 through 2015, while representation changed by less than 1 percentage point each for Hispanics and African-Americans (see fig. 3). Among senior-level managers, the representation of each racial/ethnic group changed by less than 1 percentage point during this time. As previously mentioned, racial and ethnic groups' workforce participation is projected to grow at varying rates. For example, from 2014 through 2024 labor force participation is expected to increase by 10.1 percent for African-Americans, 23.2 percent for Asians, and 28 percent for Hispanics, according to the Bureau of Labor Statistics. In contrast, labor force participation among white persons is expected to increase by 2 percent.

Management-Level Representation of Women and Men Has Been Unchanged since 2007, with Representation of Minority Women and Men Increasing Marginally

Trends in Overall Management

Representation of women and men at the overall management level in the financial services industry has remained unchanged from 2007 through 2015, with women representing about 45 percent of managers and men representing about 55 percent over time. In 2006, we similarly reported that from 1993 through 2004, women represented from about 43 percent to 46 percent of managers. The proportion of minority women in overall management increased by 1.5 percentage points from 2007 through 2015 while decreasing by 1.5 percentage points among white women (see fig. 4). During the same time period, representation of minority men in overall management increased by 2.2 percentage points while decreasing by 2.3 percentage points for white men. However, representation of white men remained significantly higher at 44.5 percent in 2015 compared to white women at 34.4 percent, minority women at 10.7 percent, and minority men at 10.3 percent. Representation of specific racial/ethnic groups in the financial services sector from 2007 through 2015 varied by gender (see fig. 5).
For example, among minority women, African-American women consistently had the highest representation in management, representing from 4.0 percent to 4.1 percent of managers. Hispanic and Asian women had similar representation in management positions over time. More specifically, Hispanic women represented from 2.5 percent to 2.9 percent of managers and Asian women represented from 2.3 percent to 3.1 percent of managers. In contrast, among minority men, Asian men consistently had the highest representation in management, representing from 3.1 percent to 4.6 percent of all managers from 2007 through 2015. African-American and Hispanic men had similar representation in management positions during this time period. More specifically, African-American men represented from 2.3 percent to 2.4 percent of managers and Hispanic men represented from 2.3 percent to 2.6 percent of managers. Representation of women among first- and mid-level managers and senior-level managers was around 48 percent and about 29 percent, respectively, from 2007 through 2015. Among first- and mid-level management positions, the representation of white women decreased by 2 percentage points from 2007 through 2015 (see fig. 6). Also during this time, the representation of white women in senior-level management positions decreased by 0.9 percentage points. For minority women, representation in first- and mid-level management positions increased by 1.6 percentage points and representation in senior-level management positions increased by 0.3 percentage points from 2007 through 2015. For men, the largest changes over time were in the first- and mid-level management positions. More specifically, from 2007 through 2015, representation of white men in first- and mid-level management decreased by 1.8 percentage points and representation of minority men in first- and mid-level management increased by 2.2 percentage points. Among senior-level managers, representation of white men decreased by 0.9 percentage points and increased by 1.5 percentage points among minority men from 2007 through 2015. For additional information on the representation of minority women and men in each management position by race/ethnicity, see appendix II.

Certain Financial Sectors Are More Diverse Than Others and Representation of Minorities Increased with Firm Size

Trends by Financial Sectors

The representation of minorities in overall management positions varied by sector (see fig. 7). EEO-1 data for the financial services industry include the following four sectors: banks and other credit institutions, funds and trusts, securities and other activities, and insurance. For example, the representation of minorities in overall management positions was consistently the greatest in the banks and other credit institutions sector and lowest in the insurance sector. Minorities' representation in overall management increased in all four sectors of the financial services industry from 2007 through 2015. For example, the representation of minorities in the banks and other credit institutions sector increased by 3.1 percentage points and the representation of minorities in the insurance sector increased by 4.2 percentage points. The representation of women in overall management also varied by sector. As shown in figure 8, the insurance sector consistently had the highest proportion of women in management positions, followed by banks and other credit institutions, funds and trusts, and securities and other activities.
From 2007 through 2015, the proportion of women in management positions decreased in each sector except for the insurance sector, where it increased by 1.9 percentage points. The proportions of Hispanics, Asians, and Other in overall management increased from 2007 through 2015 in each of the four financial sectors we reviewed, and decreased for African-Americans in all but the insurance sector (see fig. 9). Among racial/ethnic groups, Asians generally experienced the greatest increases in management-level representation. For example, from 2007 through 2015, management-level representation of Asians in the securities and other activities sector increased by 3.5 percentage points while it increased by 0.8 percentage points for Hispanics, increased by 0.6 percentage points for Other, and decreased by 0.8 percentage points for African-Americans. However, in the insurance sector, African-Americans had the highest percentage representation compared to other minority groups and increased from 6.7 percent in 2007 to 7.2 percent in 2015. The representation of minorities in overall management increased as firm size (by number of employees) increased (see fig. 10). In 2007, the representation of minorities in management was nearly 5 percentage points greater in firms with 5,000 or more employees compared to firms with 100–249 employees. In 2015, by comparison, the representation of minorities in overall management was about 6 percentage points greater in the largest category of firms (5,000 or more employees) compared to the smallest (100–249 employees). Research suggests that larger organizations may have greater capacity to address workforce diversity. Researchers also note that large organizations tend to make greater efforts to prevent workplace discrimination against women and racial/ethnic minorities because they have direct legal obligations. Additional information on representation of specific racial/ethnic groups in management positions across firm size can be found in appendix II. As shown in figure 11, the representation of women in management positions was generally the same across firm size in 2007 and 2015. For example, in 2007 women represented from nearly 45 percent to nearly 46 percent of the managers in financial services firms of varying sizes. Similarly, in 2015 women represented from nearly 44 percent to almost 47 percent of the managers in financial services firms of varying sizes.

Financial Services Sector Trends Have Similarities and Differences Compared to Other Sectors

Representation of minorities increased from 2007 through 2015 in the financial services sector, the professional services sector, and the overall private sector at both the senior level and the first and mid level of management, as shown in figure 12. The professional services sector includes jobs in legal services, accounting, consulting, and advertising, among other services. Among first- and mid-level managers, however, the representation of minorities increased at a higher rate for the professional services sector. More specifically, from 2007 through 2015, minorities' representation among first- and mid-level managers increased by 7.5 percentage points in the professional services sector. In comparison, minorities' representation among first- and mid-level managers in the financial services sector and the overall private sector increased by 3.7 and 3.8 percentage points, respectively, during this time.
Among senior-level managers, representation of minorities fluctuated from 2007 through 2015 in all three sectors. However, minorities' representation increased the most—by 2.5 percentage points—in the professional services sector, compared to the financial services and overall private sector, which increased by 1.7 and 1.4 percentage points, respectively. The financial services sector has generally had a greater proportion of women in various management positions compared to the overall private sector (excluding the financial services sector) and the professional services sector. As shown in figure 13, from 2007 through 2015 women represented about 48 percent of the first- and mid-level management positions in the financial services sector. In comparison, women's representation among first- and mid-level managers in other sectors was smaller. For example, women represented 36.7 percent of the first- and mid-level managers in the professional services sector in 2015. Among senior-level managers, the representation of women in financial services was slightly higher than their representation in the overall private sector from 2007 through 2010, after which time their representation in each sector was generally within 1 percentage point. From 2007 through 2015, women's representation among senior-level managers in financial services was generally greater than their representation among senior-level managers in the professional services sector.

Potential Talent Pools for Financial Services Positions, Including Management, Are Diverse

Potential employees for the financial services industry, who could form an external pool of future managers, can come from a wide range of academic and professional backgrounds. Undergraduate or graduate degrees are an important consideration for employment, according to staff we spoke with at financial services firms. Representatives from three financial services firms told us that while graduates with Master of Business Administration (MBA) degrees are still an important external talent pool, firms have broadened their recruitment efforts and seek students with a variety of degrees. About one-third of the external pool of potential talent for financial services, that is, those obtaining undergraduate or graduate degrees, were racial/ethnic minorities from 2011 through 2015 (see fig. 14). Rates of bachelor's degree attainment by racial/ethnic minorities increased from 29.4 percent in 2011 to 33.9 percent in 2015. During the same time period, rates of master's degree attainment increased by similar amounts, from 28.8 percent to 33 percent, and MBA attainment increased from 35.6 percent to 39.2 percent. As previously noted, the proportion of managers in the financial services industry who were racial or ethnic minorities increased from 17.3 percent in 2007 to 21 percent in 2015, which is lower than the rates of bachelor's, master's, and MBA degree attainment for these groups across all years. Among the potential external talent pool of minority women and minority men, educational attainment has consistently increased over time, and women have generally obtained a higher percentage of undergraduate or graduate degrees compared to men. For example, from 2011 through 2015, rates of bachelor's degree attainment increased by at least 2 percentage points each for minority women and minority men, and minority women consistently earned a greater proportion of bachelor's degrees (see fig. 15).
Similarly, the proportions of master's and MBA degrees earned from 2011 through 2015 increased for minority women and minority men. During this time frame, minority women consistently earned a greater proportion of master's and MBA degrees compared to minority men. Additional information about educational attainment among the potential external talent pool of women and men can be found in appendix IV. A majority of the external pool of potential talent for the financial services industry, that is, those obtaining undergraduate or graduate degrees, have been women in recent years (see fig. 16). From 2011 through 2015, women consistently earned about 58 percent of bachelor's degrees, just over 60 percent of master's degrees, and about 45 percent of the MBA degrees. As we previously discussed, women have generally represented about 45 percent of overall management in the financial services industry. Two of the nonmanagement job categories in the financial services sector—professional and sales positions—are considered to be the industry's potential "internal pipeline," which comprises staff who could potentially move into management positions. Professional positions can include credit and financial analysts, personal financial advisors, financial examiners, and loan officers; sales positions can include those in securities, commodities, financial services, and insurance sales agents. EEOC data are derived from annual reports that show firms' workforce composition at a single point in time and therefore do not allow for analysis of the extent to which firms promote staff internally. However, the data do provide some insights into the potential internal pipeline. Representation of racial/ethnic minorities in professional and sales positions has changed over time, but has generally been greater than their representation in overall management positions (see fig. 17). More specifically, EEOC data show that racial/ethnic minorities generally comprised about 25 percent of the professional positions from 2007 through 2011, and then increased to nearly 28 percent in 2015. In contrast, the representation of racial/ethnic minorities in sales positions decreased during the 2007–2009 financial crisis, and then increased from nearly 23 percent in 2011 to nearly 26 percent in 2015. As previously noted, minorities have represented from 17 percent to 21 percent of overall management in the financial services industry from 2007 through 2015. See appendix IV for additional information on the potential internal pool for management positions in the financial services industry. Representation of women in professional positions in the financial services industry has generally been greater than women's representation in overall management (see fig. 18). For example, from 2007 through 2015, the proportion of women in professional positions has generally been just over 50 percent. As previously noted, during this time frame women consistently represented about 45 percent of overall management. The percentage of women in sales positions within the financial services industry has generally been lower, at about 40 percent.
Industry and Other Sources Describe Ongoing Workforce Diversity Challenges and Practices to Address Them

Representatives from financial services firms and other stakeholders described many of the same challenges in recruiting and retaining women and racial/ethnic minorities as we have previously reported, including negative perceptions of the financial services industry that might discourage potential candidates. Practices that financial services firms use to address these challenges include broadening recruitment efforts, establishing relationships with student groups and professional organizations, and providing training on unconscious bias. Representatives from all of the financial services firms we met with agreed on the importance of analyzing data on the demographic characteristics of their employees. Some firm representatives noted that by assessing employee data they can identify trends that may need to be addressed. However, representatives and other stakeholders differed on the benefits of making firm-level information on employee diversity publicly available.

Firms and Other Sources Cite a Variety of Recruiting Challenges and Practices That May Help Address Them

Representatives from financial services firms and organizations that advocate for women or racial/ethnic minorities described a variety of challenges to recruiting a diverse workforce for the financial services sector, many of which we have described in previous reports on the topic. For example, representatives from several financial services firms stated that negative perceptions of the industry could limit potential candidates' interest in the field. Additionally, representatives of an organization that advocates for workforce diversity stated that women and minorities may not seek employment in the financial sector due to concerns about the industry's reputation or a lack of awareness of career paths in the industry. Representatives from some financial services firms told us that it is challenging to get firm leadership on board with recruiting at a broad group of schools, rather than a small number of elite universities. Representatives from three organizations that advocate for women or minorities similarly observed that some financial services firms focus on elite universities. Also, some financial services firm representatives told us that there is a great deal of competition for diverse talent and that financial services firms are increasingly competing with technology firms for talent. Representatives from two firms also stated that it is challenging to recruit diverse staff to work in some geographic locations. Reports on workforce diversity echo some of the recruiting challenges that we heard from financial firm representatives. For example, a 2012 consulting firm report on women in senior management notes that at the entry level, businesses viewed as male-dominated tend to attract fewer women. This report also states that sometimes companies have a view that positions requiring long hours will not suit women. A 2012 study on women's job choices found that in financial services, women are significantly less likely than men to apply for financial advisory and trading jobs and more likely to apply for jobs in general management—most notably internal finance and marketing.
A 2014 consulting firm report on diversity in the leadership of companies in the United Kingdom, Canada, Latin America, and the United States found a number of barriers to the recruitment of all diversity groups (including women as well as racial/ethnic groups). These barriers include the lack of visible support from leadership and inadequate collection and use of data on the advantages of more diverse organizations. Additionally, a 2016 consulting firm report on women in financial services in 32 countries noted that a majority of asset managers who were interviewed held the view that certain jobs in financial services, such as asset management, may deter qualified women from applying, as may a lack of knowledge about the industry among graduate students.

Financial firm representatives and other stakeholders we spoke with, as well as research we reviewed, described a variety of practices that they believe or have found to be effective for recruiting women and racial/ethnic minorities. These practices include the following.

Engaging in broad-based recruiting. Representatives from three firms stated that they are increasingly hiring and interested in recruiting students from a variety of academic disciplines, such as liberal arts or science and technology. For example, representatives from one firm explained that they are interested in candidates with critical thinking skills, and that technical skills can be taught to new employees. Additionally, representatives from several firms noted the importance of recruiting at a broad group of schools, not just a small number of elite universities.

Establishing relationships with student and professional organizations. Most financial firm representatives told us that an effective strategy for recruiting diverse students is to establish relationships with student organizations representing diverse groups. Representatives from one firm explained that working with student groups helps expose diverse students to careers in financial services. Additionally, to help recruit women and minorities who may already have graduated from college or graduate school, representatives of most financial firms and two trade groups described establishing relationships with professional organizations that represent women and minorities.

Intentionally recruiting diverse candidates. Representatives from two financial services firms and two organizations that advocate for the financial services industry noted that firms should intentionally seek out diverse candidates. For example, representatives from one firm discussed the importance of including diversity in a firm's recruiting strategy and establishing relationships with schools and organizations that can increase women's and minorities' exposure to financial services.

Offering programs to increase awareness of financial services. Several financial firm representatives told us that they establish relationships with high school students to expose diverse students to the financial services field. For example, representatives from one firm described a program that pairs high school students with a mentor from the firm. Two organizations that advocate for the financial services industry also noted that it is helpful for financial services firms to establish relationships with high schools to educate young students about the field.
A 2016 consulting firm report on women in financial services organizations in 32 countries found that a majority of asset managers who were interviewed thought it was important for financial services firms to educate students about careers available in financial services. The report noted that more on-campus education and public relations work could help attract women to the field.

Firms and Other Sources Note Retention Challenges and Practices That May Help Retain and Promote Diverse Employees

Reports on workforce diversity, representatives from financial services firms, and other stakeholders discussed several challenges to retaining women and racial/ethnic minorities, a number of which we have previously reported. Representatives of three financial services firms and two organizations that advocate for the financial services industry told us that it is challenging to retain women and minorities at organizations that lack women and minorities in management positions. Additionally, two former employees of large financial services firms, both racial/ethnic minorities, told us that there are fewer mentors or role models for women and racial/ethnic minorities in firms that have fewer women and minorities in leadership positions. A 2012 consulting firm report on women in senior management reported that women can lack a network or sponsor to help them advance. Some financial firm representatives noted that employee resistance, particularly from middle managers, poses a challenge to diversity and inclusion efforts. Additionally, some organizations that advocate for women and minorities noted that unconscious bias is an issue that can negatively affect women and minorities. As an example, managers may give hiring or promotion preferences to persons who have hobbies or educational backgrounds similar to theirs. Also, the authors of a 2014 report on women in senior management at financial and nonfinancial organizations across 40 countries suggested that unconscious bias against women can result in a reluctance to promote women in the expectation that they will eventually put family first. The report stated that this bias can trigger a self-fulfilling prophecy, as lack of promotion is one of the top reasons cited by women for leaving their jobs.

Reports on diversity, representatives from financial services firms, and other stakeholders described a variety of practices that may be helpful in retaining women and racial/ethnic minorities. These practices include the following.

Establishing affinity groups. Representatives from four financial services firms stated that having affinity groups helps promote both diversity and inclusion. Affinity groups—sometimes referred to as employee resource groups or networking programs—provide forums for employees to gather socially and share ideas outside of their particular work unit. Representatives from two firms emphasized that it is important for affinity groups to have meetings with firm leadership. A 2007 study reported that networking programs have stronger effects on some demographic groups than others.

Training managers and employees on inclusion and unconscious bias. Several financial firm representatives emphasized the importance of offering training to foster an inclusive work environment. As previously noted, an inclusive work environment is one that encourages employees to feel valued for their unique qualities and experience a sense of belonging.
Training on inclusiveness, emotional intelligence, and unconscious bias was specifically noted by two financial firm representatives as being helpful for both managers and staff.

Establishing management-level accountability. Representatives from three financial firms told us that firm management should be held accountable for the firm's workforce diversity goals. Managers' performance in maintaining a diverse workforce can be evaluated in a variety of ways. For example, two firm representatives discussed the use of "diversity scorecards." A diversity scorecard is a set of objectives and measures derived from an organization's overall business strategy and linked to its diversity strategy. Additionally, one firm representative noted that tying senior managers' compensation to diversity goals has been an effective practice for retaining women and minorities. Researchers have noted that efforts to establish organizational responsibility for diversity lead to the broadest increases in managerial diversity.

Offering staff mentors and sponsors. Representatives from three financial firms and two organizations that advocate for the financial services industry told us that providing staff with mentors or sponsors helps retain and promote women and racial/ethnic minorities. In general, a mentor provides advice and guidance to more junior staff (protégés) and a sponsor nominates or supports a protégé's promotion. Research and reports discuss the benefits of mentors and sponsors.

Implementing family-friendly policies. Some of the financial services firm representatives and three of the four individuals with whom we met (members of racial minority groups who had worked in large financial services firms) noted the importance of work-life balance to help retain women. A 2011 paper on the Canadian financial sector described selected banks' family-friendly policies, such as flexible work schedules, that facilitate work-life balance.

As previously noted, in 2005 we identified a set of nine leading diversity management practices that should be considered when an organization is developing and implementing diversity management. These practices include measuring the impact of diversity programs and providing training for management and staff on diversity. Financial firm representatives and other stakeholders with whom we met agreed that these practices are still relevant. However, researchers have found that practices related to diversity may not benefit all genders and racial/ethnic groups evenly. For example, a 2015 consulting firm report found that the approach of many companies to cover all groups (racial/ethnic, gender, and sexual orientation) using a single diversity program is insufficient. The report found that diversity-related practices should be tailored to specific groups. Earlier empirical research similarly found that the effects of various diversity-related initiatives varied across gender and race/ethnicity groups.

Firms and Stakeholders Generally Agree on the Value of Assessing Workforce Diversity and Inclusion, but Differ on Benefits of Making Data Public

Representatives of financial services firms told us that it is useful for financial services firms to analyze demographic data to assess the diversity of their workforce and identify trends that may need to be addressed. All of the financial services firms with whom we met agreed on the importance of analyzing employee data.
Some firm representatives noted that by assessing employee data they can analyze the gender and racial/ethnic diversity of new hires, employees leaving the organization, and newly promoted staff and managers. Representatives from several firms stated that it is important for organizations to be self-aware of how they are doing with workforce diversity. Also, representatives from an investment bank told us that they analyze employee data over time to determine whether certain demographic groups tend to leave the firm after a certain number of years. With this information, the representatives told us, the organization can proactively take steps to help retain these staff, such as providing staff with mentors. Additionally, representatives from a large bank explained that by analyzing demographic data of employees, the organization can identify “leaks” in their internal pipeline. That is, they can determine when and potentially why women and racial/ethnic minorities leave before progressing into management positions. Several financial firm representatives told us that when they identify data trends that indicate problems, such as retention issues, they then take steps to address them. Several financial firm representatives stated it is important to know the demographic make-up of employees, because firms should look like their customers. As an example, a representative of an investment banking institution told us that over half of the firm’s customers were women; therefore it was a priority for the organization to know how to serve them as well as other diverse groups. Also, a firm representative told us that some potential clients call inquiring about racial and gender diversity before doing business. The representative added that clients are interested in receiving advice and information from advisors to whom they can relate. Additionally, representatives from a large financial services firm stated workforce diversity helps the firm better understand its diverse customers. Representatives of three financial services firms with whom we met also described the importance of obtaining employees’ views about the organization, including employees’ feelings about diversity and inclusion. For example, a financial services firm representative told us that in order to be successful at fostering workforce diversity firms must obtain employees’ views on work/life balance, opportunities for advancement, and inclusiveness. He noted that while quantitative data on employees’ demographic characteristics may indicate that the workforce has become more diverse, employees may not feel like the workplace has become more diverse. Three of the organizations with whom we met (two that advocate for the financial services industry and one that advocates for diversity) agreed on the importance of surveying employees about diversity and inclusion. For example, representatives from a financial services industry trade group told us that employee surveys can be used to detect issues that minority employees face. Research points out that having diversity management practices alone is insufficient for improving workplace performance. This research finds that productive workplaces exist when inclusion is promoted and employees are encouraged to express their opinions and their input is sought before making important organizational decisions. 
Representatives of financial services firms and organizations that advocate for diversity varied in their views on whether data on the demographic characteristics of employees at specific financial services firms should be shared publicly, for example through diversity indexes or on the company's website. Representatives from two financial firms told us that publicly disclosing firm-level employee characteristics would not benefit the company. More specifically, representatives from two financial services firms indicated that diversity indexes are of limited value because they do not indicate whether a firm has made progress on diversity. One representative noted that the reputation of firms that are not diverse could be damaged, which could make improvement of workforce diversity more difficult. As discussed earlier, potential candidates' negative perceptions of the financial services industry's reputation can make it difficult for firms to recruit diverse employees. In contrast, representatives from one of the financial services firms and two organizations that advocate for diversity told us that making data on the diversity of firms' workforce publicly available was beneficial because it highlighted firms' diversity efforts. As an example, representatives from a large financial services firm told us that the firm regularly participates in a number of surveys on diversity, which third parties use to create various diversity indexes. The indexes highlight this firm's progress on employee diversity. Additionally, several of the firms with whom we met post data on their websites indicating demographic information about their employees, such as the proportion of women in management and employees' country of origin. Representatives of organizations that advocate for diversity in the workplace cited the benefits of diversity indexes and the publication of workforce diversity information on specific financial services companies. For example, one representative stated that requiring businesses to be transparent about their workforce diversity data creates incentives to improve the diversity of their workforce. A representative from an organization that advocates for women noted that diversity indexes or other public information can be helpful for investors, who want to know about the workforce composition of the businesses that they may invest in. This representative stated that institutional investors have been leading the charge for more transparency and diversity among companies. We have previously reported on large investors' interest in having more public disclosure about the diversity of corporate board directors.

Agency Comments

We provided a draft of this report to EEOC. We received technical comments, which we addressed as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of this report until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees and the Acting Chair of the Equal Employment Opportunity Commission. We will make copies available to others upon request. The report will also be available at no charge on our website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-8678 or garciadiazd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs are listed on the last page of this report.
GAO staff who made major contributions to this report are listed in appendix V.

Appendix I: Objectives, Scope, and Methodology

The objectives of this report were to examine (1) trends in management-level diversity in the financial services industry, (2) trends in diversity among potential talent pools, and (3) challenges financial services firms identified in trying to increase workforce diversity and practices firms used to increase workforce diversity.

Trends in Management-Level Diversity

To describe management-level diversity in the financial services industry, we obtained 2007–2015 workforce data from the Equal Employment Opportunity Commission's (EEOC) Employer Information Report (EEO-1). EEO-1 data are annually submitted to EEOC by most private-sector firms with 100 or more employees. Most federal contractors with 50 or more employees are also required to submit to EEOC annual reports showing the composition of their workforce; however, consistent with our 2006 and 2013 reports, we did not include these contractors in our analysis. Accordingly, the EEO-1 data presented in this report do not exactly match the EEO-1 data on EEOC's website. We found that these differences were small and did not materially change the trends in the representation of various demographic groups. We obtained EEO-1 data in February 2017 for the finance and insurance industry categorized under the North American Industry Classification System (NAICS) code 52 from 2007 through 2015, the most recent year of data available. EEO-1 data were specifically obtained for each job category by gender, race/ethnicity, firm size, and industry sectors. We used the race/ethnicity categories used by EEOC: African-American, Asian, Hispanic, and Other. The "Other" category, which represents less than 3 percent of the financial services workforce, includes Native Hawaiian or Pacific Islander, Native American or Alaska Native, and "two or more races." Job categories include senior-level managers, first- and mid-level managers, professionals, technicians, sales workers, administrative support workers, craft workers, operatives, laborers and helpers, and service workers. We defined "overall management" as senior-level managers and first- and mid-level managers. We compared 2007 through 2015 EEO-1 data on the financial services industry to comparable information we previously published using EEO-1 data on diversity trends in the financial services industry from 1993 through 2006. Because the EEOC data do not come from a sample, but are collected from all businesses, we did not calculate standard errors or confidence intervals on our estimates. To compare diversity trends in the financial services industry with the overall private sector and the "professional and technical services sector," we downloaded 2007 through 2015 EEO-1 data on the overall private sector and the professional and technical services sector from the EEOC website. We excluded data for the financial services industry from the data representing the "overall private sector." The professional and technical services sector is categorized under the NAICS code 54, and includes establishments that specialize in performing professional, scientific, and technical activities for others, such as accounting, bookkeeping, payroll services, and consulting services.
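To illustrate the kind of calculation this methodology describes, the sketch below combines EEOC's two management categories into "overall management" and computes a group's representation and its percentage-point change between two years. The employee counts are invented placeholders, not EEO-1 data, and the function names are our own.

```python
# Hypothetical illustration of the representation calculations described
# in this appendix; the counts below are invented placeholders.

counts = {
    2007: {"senior": {"minority": 1_900, "total": 16_000},
           "first_mid": {"minority": 39_000, "total": 208_000}},
    2015: {"senior": {"minority": 2_400, "total": 19_500},
           "first_mid": {"minority": 49_000, "total": 219_000}},
}

def overall_management_share(year: int, group: str) -> float:
    """'Overall management' = senior-level plus first- and mid-level
    managers; returns the group's share as a percentage."""
    data = counts[year]
    group_total = data["senior"][group] + data["first_mid"][group]
    all_total = data["senior"]["total"] + data["first_mid"]["total"]
    return 100 * group_total / all_total

share_2007 = overall_management_share(2007, "minority")
share_2015 = overall_management_share(2015, "minority")
print(f"2007: {share_2007:.1f}%  2015: {share_2015:.1f}%  "
      f"change: {share_2015 - share_2007:+.1f} percentage points")
```

Note that the change is reported in percentage points (the arithmetic difference between two shares), not in percent, which is the convention used throughout this report.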
For the financial services industry, we used the data provided to us by EEOC, which, as discussed earlier, does not include federal contractors with fewer than 100 employees and therefore does not precisely match data on EEOC's website. We chose not to rely on data from the EEOC website for this comparison so that data on the financial services sector would be from a consistent source throughout the report. We compared the representation of racial/ethnic minorities and women in management positions across all three sectors from 2007 through 2015. To determine the reliability of the EEO-1 data from EEOC that we used throughout this report, we interviewed knowledgeable EEOC officials and reviewed relevant documents provided by agency officials and obtained on its website. We also conducted electronic testing of the data. We determined that the EEO-1 data were sufficiently reliable for describing workforce diversity trends.

Trends in Potential Talent Pools

To describe recent trends in diversity among potential external talent pools (a potential source of future managers from outside the firms) for positions in the financial services sector, we interviewed representatives from three financial services firms about the preferred educational requirements needed to enter the field. We then used educational attainment data available from the Department of Education's Integrated Postsecondary Education Data System (IPEDS) to analyze the race/ethnicity and gender characteristics of individuals receiving undergraduate degrees, master's degrees (of all subjects), and Master of Business Administration (MBA) degrees for the school years ending 2011 through 2015. At the time of our review, data for the school year ending in 2015 were the most recent data available. Through a review of documentation and electronic testing, we found the IPEDS data to be sufficiently reliable for describing trends in educational attainment. To describe recent trends in diversity among potential internal talent pools for management positions, we first identified the nonmanagement positions that were most likely to feed into management by reviewing an EEOC report on diversity in financial services and analyzing job descriptions and education requirements for nonmanagement positions in the financial services sector. Based on this information, we determined that the professional and sales job categories best represent the primary internal talent pool for management positions in the financial services industry. We then analyzed EEO-1 data for NAICS code 52 to identify trends in the representation of women and racial/ethnic minorities in professional and sales positions from 2007 through 2015. We compared these trends to trends in the representation of women and racial/ethnic minorities in overall management positions in the financial services industry.

Challenges and Practices Related to Increasing Workforce Diversity

To identify challenges financial services firms face in trying to increase workforce diversity as well as practices financial services firms use to improve workforce diversity, we conducted a literature review. We used research databases such as ProQuest and SCOPUS to search for scholarly or peer-reviewed material, government reports, conference papers, trade and industry articles, and association or nonprofit publications published from 2006 through 2016.
Also, we used Internet search techniques and keyword search terms to identify publicly available information about workforce diversity in the financial services sector as of August 2017. In cases where the studies or articles referenced older materials that focused on workforce diversity practices, we reviewed those as well. In addition, we interviewed representatives from 13 financial services firms that were actively involved in workforce diversity efforts and representatives of 11 organizations that advocate for the financial services industry, women or racial/ethnic minorities, or both. We also interviewed a selection of two male and two female members of racial minorities who formerly worked for large financial services firms. We interviewed representatives from 9 of the 13 financial services firms in a group setting. Based on the group-discussion format, we did not collect precise counts of the participants who agreed or disagreed with specific practices or challenges. Financial services firms were selected based on their participation at a conference on improving diversity in the financial services industry, their participation in our previous work, and suggestions from organizations that represent the financial services industry. Former employees were selected based on their participation in a conference on diversity in financial services or their experience in the financial services industry. We also attended a conference on diversity in the financial services sector. To determine how financial services firms assess their diversity policies and practices, we interviewed representatives of financial services firms as well as financial services industry trade groups. The views expressed by firms, trade organizations, and former employees may not be representative of all entities involved in workforce diversity efforts. We used certain qualifiers when collectively describing responses from financial services firms and trade groups, such as "some," "several," and "most." We define "some" as four, "several" as at least five but less than most, and "most" as more than half relative to the total number possible. We also reviewed academic and other research studies on the effect of specific workforce diversity policies. We conducted this performance audit from August 2016 through November 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Additional Analysis of Diversity Trends in the Financial Services Industry

This appendix provides additional detailed analysis of EEOC data on the financial services industry from 2007 through 2015.

Analysis by Gender, Race/Ethnicity, and Management Level

The representation of minority women in first- and mid-level management increased by 1.6 percentage points from 2007 through 2015 while their representation in senior-level management increased by 0.3 percentage points during this time (see fig. 19). Women's representation among specific racial/ethnic groups did not change by more than 1 percentage point for any specific group at either management level from 2007 through 2015.
The representation of minority men in first- and mid-level management increased by 2.2 percentage points from 2007 through 2015 and their representation in senior-level management increased by 1.5 percentage points (see fig. 20). Men's representation among specific racial/ethnic groups did not change by more than 1 percentage point at the senior management level. In contrast, in first- and mid-level management positions, Asian men experienced an increase in their management representation of 1.7 percentage points. Men of other races/ethnicities did not experience changes in their representation in first- and mid-level management positions of more than 1 percentage point.

Analysis by Firm Size

Representation of minorities in overall management increased from 2007 through 2015 in firms of all sizes, with the greatest increases occurring in firms with over 1,000 employees (see fig. 21). Representation of Asians, Hispanics, and Other in management positions increased over time in firms of all sizes while representation of African-Americans in management decreased by less than 1 percentage point or stayed the same from 2007 through 2015 in firms of all sizes. In 2015, Asians and African-Americans had the largest percentage of minority representation, 8.7 percent and 7.1 percent, respectively, in firms with over 5,000 employees.

Appendix III: Diversity in the Financial Services Industry by State, 2015

This appendix provides information on management representation in the financial services industry by state in 2015.

Appendix IV: Diversity Trends in Degrees Earned and Nonmanagement Job Categories

This appendix provides additional information about the potential external and internal talent pools for the financial services sector. Table 2 includes information on the demographic characteristics of persons obtaining undergraduate-level and graduate-level degrees for the school years ending from 2011 through 2015. Tables 3 through 7 show the representation of various demographic groups working in the Professional and Sales job categories of the financial services sector from 2007 through 2015.

Appendix V: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the individual named above, Kay Kuhlman, Assistant Director; Lisa Moore, Analyst in Charge; Rachel Batkins; Ben Bolitzer; Mitch Karpman; Jill Lacey; May Lee; John Mingus; Tovah Rom; Kelsey Sagawa; Jena Sinkfield; and Tyler Spunaugle made major contributions to this report.
Why GAO Did This Study

The U.S. workforce has become increasingly diverse and is projected to become even more diverse in the coming decades. As a result, many private sector organizations have recognized the importance of recruiting and retaining minorities and women for key positions to improve their business or organizational performance and help them better meet the needs of a diverse customer base. The financial services industry is a major source of employment in the United States and affects the economic well-being of its customers. However, questions remain about diversity in the financial services industry, which provides services that help families build wealth and are essential to economic growth. GAO was asked to analyze diversity trends in the financial services industry, particularly in management positions. This report examines (1) trends in management-level diversity in the financial services industry from 2007 through 2015, (2) trends in diversity among potential talent pools, and (3) challenges financial services firms identified in trying to increase workforce diversity and practices firms used to address them. GAO analyzed data from the Equal Employment Opportunity Commission (EEOC) and the Department of Education. The most recent available data were from 2015. GAO also reviewed studies on workforce diversity and interviewed representatives from financial services firms and organizations that advocate for the financial services industry, women, or minorities. EEOC provided technical comments on a draft of this report that GAO incorporated as appropriate.

What GAO Found

Overall representation of minorities in first-, mid-, and senior-level management positions in the financial services industry increased from about 17 percent to 21 percent from 2007 through 2015. However, as shown in the figure below, representation varied by race/ethnicity group and management level. Specifically, representation of African-Americans at various management levels decreased while representation of other minorities increased during this period. Overall representation of women was generally unchanged during this period. Representation of women among first- and mid-level managers remained around 48 percent, and among senior-level managers it remained about 29 percent, from 2007 through 2015. Potential employees for the financial services industry, including those that could become managers, come from external and internal pools. For example, the external pool includes those with undergraduate or graduate degrees, such as a Master of Business Administration. In 2015, about 33 percent of the external pool were minorities and around 60 percent were women. The internal talent pool for potential managers in financial services includes those already in professional positions. In 2015, nearly 28 percent of professional positions in financial services were held by minorities and just over 51 percent were held by women. Research, financial services firm representatives, and financial industry stakeholders described challenges to recruiting and retaining members of racial/ethnic minority groups and women and practices that could help address these challenges, including recruiting from a wider variety of schools. Firm representatives said that it is important for firms to assess firm-level data on diversity and inclusiveness. However, firm representatives and other stakeholders differed in their views on whether firm-level diversity data should be made public.
For example, one stakeholder stated that sharing diversity data publicly would create incentives for improvement. However, a firm representative said that for firms that are not diverse, making employee diversity data public could make improving workforce diversity more difficult.
Background

The federal government receives funds from numerous sources in addition to tax revenues, including collections of user fees, fines, and penalties. According to the Budget of the U.S. Government, in fiscal year 2017, the U.S. government’s total receipts were $3.3 trillion and collections of fees, fines, penalties, and forfeitures were more than $350 billion.

User fees (fees): Fees are charges assessed to users for goods or services provided by the federal government, such as fees to enter a national park, and charges assessed for regulatory services, such as fees charged by the Food and Drug Administration for prescription drug applications. Fees are an approach to financing federal programs or activities that, in general, are related to some voluntary transaction or request for government services above and beyond what is normally available to the public. By requiring identifiable beneficiaries to pay all or part of the cost of a good or service, fees can promote both equity and economic efficiency. Regularly reviewing fees helps ensure that agencies, Congress, and stakeholders have complete information.

Fines and penalties: Criminal fines and penalty payments are imposed by courts as punishment for criminal violations. Civil monetary penalties are not a result of criminal proceedings but are employed by courts and federal agencies to enforce federal laws and regulations. For example, civil monetary penalty payments are collected by certain financial regulators, such as the Federal Deposit Insurance Corporation, from enforcement actions assessed against financial institutions for violations related to anti-money laundering requirements. Reviews and, as needed, adjustments to fines and penalties could help ensure they provide a meaningful incentive for compliance.

The design and structure of statutory authorities for fees, fines, and penalties can vary widely. In prior work, we have identified key design decisions related to how fee, fine, and penalty collections are used that help Congress balance agency flexibility with congressional control and oversight. Congress determines the availability of collections by defining the extent to which an agency may obligate and expend them, including the availability of the funds, the period of time the collections are available for obligation, the purposes for which they may be obligated, and the amount of the collections that are available to the agency. Fees, fines, and penalties may be categorized as one of three types of collections based on the structure of their statutory authority: offsetting collections, offsetting receipts, or governmental receipts (see figure 1). Offsetting collections can provide agencies with more flexibility because they are generally available for agency obligation without an additional annual appropriation. In contrast, offsetting receipts and governmental receipts involve greater congressional opportunities for control and oversight because, generally, additional congressional action is needed before the collections are available for agency obligation. For example, Congress must appropriate collections from offsetting receipts before agencies are authorized to obligate these funds. The type of collection also determines how OMB and Treasury report the collections. Offsetting collections and offsetting receipts result from businesslike transactions and are recorded as offsets to spending.
Offsetting collections are authorized by law to be credited to appropriation or fund expenditure accounts, while offsetting receipts are deposited in receipt accounts. Because offsetting collections are offsets to spending, an account will generally show the net amount that was collected and spent at any point in time.

Congressional Actions to Make Government-wide Data Publicly Available

While there is no statutory requirement for government-wide reporting of data on specific fees, fines, and penalties, Congress has enacted legislation to make other data on federal spending and federal programs publicly available:

The Digital Accountability and Transparency Act of 2014 (DATA Act). The DATA Act built on previous transparency legislation by expanding what federal agencies are required to report regarding their spending. The act significantly increased the types of data that must be reported, and required the use of government-wide data standards and regular reviews of data quality to help improve the transparency and accountability of federal spending data. These data are reported on the USAspending.gov website.

The GPRA Modernization Act of 2010 (GPRAMA). GPRAMA, in part, requires OMB to present a coherent picture of all federal programs by making information available about each federal program on a website, including related budget and performance information. Programs have been defined as an organized set of activities directed toward a common purpose or goal that an agency undertakes or proposes to carry out its responsibilities. A federal program inventory would consist of the individual programs identified by the agencies and OMB and the information collected about each of them. OMB and agencies implemented the inventory once, in May 2013. In October 2014, we found that several issues limited the usefulness of that inventory, and we made several recommendations to OMB to ensure the effective implementation of federal program inventory requirements and to make the inventories more useful. Further, in September 2017, we found that OMB continued to delay implementation of the program inventory. We recommended that OMB consider a systematic approach to developing the program inventory and issue instructions to provide time frames and milestones for its implementation. Although OMB updated its instruction in June 2018, it did not provide any time frames or milestones for implementing the inventory. OMB has yet to develop a systematic approach for resuming implementation of the inventory or specific time frames for doing so.

OMB, Treasury, and Agencies Publicly Report Some Data on Fees, Fines, and Penalties, but the Data Have Significant Limitations

OMB, Treasury, and Agencies Report Broad Financial Information, but Not All Collections from Specific Fees, Fines, and Penalties

There is no source of data that lists all collections of specific fees, fines, and penalties at a government-wide or agency level. Both OMB and Treasury report government-wide budgetary and financial data, including some information on collections of fees, fines, and penalties; however, none of the reports identifies all specific fees, fines, and penalties, and their associated collection amounts, at a government-wide level. OMB reports budgetary and financial data in various parts of the Budget of the U.S. Government, including Analytical Perspectives, the Budget Appendix, and the Public Budget Database. Treasury reports financial data in the Combined Statement.
Each source provides information for a broader purpose than reporting on collections of fees, fines, and penalties. OMB and Treasury provide specific instructions for agency submission of the underlying data, as described in table 2. OMB’s reports include budgetary and financial information on federal collections at different levels of detail—from aggregated government-wide data to agency account-level data—depending on the source and its purpose. Analytical Perspectives identifies collections as fees and as fines, penalties, and forfeitures and reports government-wide summary information on these collections. For example, in a table summarizing government-wide governmental receipts in Analytical Perspectives, OMB reported fines, penalties, and forfeitures in federal funds as $20.98 billion and in trust funds as $1.17 billion for fiscal year 2017. These summary data do not provide a government-wide total of all federal collections from fines, penalties, and forfeitures because they do not include those that are categorized as offsetting collections or offsetting receipts, according to OMB staff. OMB staff said that OMB does not publish a government-wide total of fines, penalties, and forfeitures. OMB data on governmental receipts include source codes—including a code that identifies fines, penalties, and forfeitures—but data on offsetting collections and offsetting receipts do not include a comparable source code. In the Budget Appendix and the Public Budget Database, OMB reports account-level information by agency, identified by types of collections, such as offsetting collections, offsetting receipts, and governmental receipts. The Budget Appendix and the Public Budget Database do not label collections as fees, fines, or penalties and, therefore, cannot be used to calculate government-wide totals for fees, fines, or penalties. To assemble Analytical Perspectives, the Budget Appendix, and the Public Budget Database, OMB compiles data from federal agencies into OMB MAX. OMB MAX, which is not publicly available, contains government-wide data at the account level and captures information such as the type of collection and the type of fund to which collections are deposited. While the data in OMB MAX help drive reporting in the Budget, not all data compiled in OMB MAX appear in the Budget. For example, OMB MAX includes an indicator for accounts that contain fees, but that information is not made available in the Budget of the U.S. Government. According to congressional staff we spoke with, they do not have open access to OMB MAX, but OMB provides excerpts of OMB MAX data to staff upon request. Treasury’s Combined Statement reports both government-wide totals and agency account-level data for collections classified as receipts, by various source categories—such as proprietary receipts from the public, miscellaneous receipts, and fines, penalties, and forfeitures.

Fees. Fees may fall within several source categories. Therefore, Treasury does not have a single government-wide total for fees. It does present government-wide totals for various source categories, including Sale of Products and Fees for Permits and Regulatory and Judicial Services, for example. Treasury also reports some fees under non-fee categories, such as Miscellaneous Taxes and Excise Taxes.

Fines, Penalties, and Forfeitures. Treasury reports a government-wide total of receipts of fines, penalties, and forfeitures, which in fiscal year 2017 was $22.2 billion.
Treasury’s Combined Statement presents these data, disaggregated by account, in the tables Receipts by Source Categories and Receipts by Department. For example, it identifies total Internal Revenue Service receipts in the category Fines, Penalties, and Forfeitures of about $6.8 million in fiscal year 2017. Treasury also reports some fines, penalties, and forfeitures receipts under other categories; these receipts are not included in its total of fines, penalties, and forfeitures. For example, Department of Homeland Security breached bond penalties are reported in two categories labeled as fees: Miscellaneous Receipts – Fees for Permits and Regulatory and Judicial Services and Offsetting Governmental Receipts – Regulatory Fees (see figure 2). In addition to the government-wide data sources, agencies report some data on their collections of specific fees, fines, and penalties in their annual financial reports, in congressional budget justifications, and on agency websites. These data are dispersed by agency, are not comprehensive, and cannot be aggregated to create government-wide data because they vary in format and in the level of detail presented. For example:

The Environmental Protection Agency (EPA) has an online, searchable database of enforcement and compliance information that includes data on individual fine and penalty assessments for violations of certain, but not all, statutes.

The Department of Labor also makes accessible in an online database selected enforcement data collected by the Employee Benefits Security Administration, the Mine Safety and Health Administration, the Occupational Safety and Health Administration, and the Wage and Hour Division, but without Department of Labor-wide data standards on individual fine and penalty assessments.

USDA’s Animal and Plant Health Inspection Service’s 2019 Congressional Budget Justification, on the other hand, is a PDF document that provides annual collection totals for Agriculture Quarantine Inspection Fees, Import-Export User Fees, Phytosanitary Certificate User Fees, Veterinary Diagnostics User Fees, and Other User Fees, rather than data disaggregated to individual fee assessments.

OMB Reports Government-wide Totals that Cannot Be Disaggregated and Does Not Disclose Limitations or Regularly Review Its Designation of Fees

OMB Reports Government-wide Data that Cannot Be Disaggregated

The government-wide totals for fees that OMB reports in Analytical Perspectives are not presented at a more disaggregated level, such as by agency or program, except for some major fee collections identified by OMB. For example, in Analytical Perspectives for fiscal year 2017, OMB reported $335.4 billion as a government-wide total of fee collections. OMB also reported some disaggregated data for the subset of fees that were offsetting collections and offsetting receipts. Specifically, it listed 11 fees totaling $258.4 billion collected by specific agencies and listed the remaining $72.3 billion as “all other user charges” without identifying the agency or program. As described in table 1 above, clear and accessible data can be aggregated or disaggregated by the user. OMB has more detailed data on collections in OMB MAX, including the agency, account, type of collection, and fund type, which it uses to compile reported totals of fees as well as fines, penalties, and forfeitures. OMB does not publicly report these data disaggregated below the government-wide level, such as at the agency level.
OMB staff said that they do not report the disaggregated data because the purpose of Analytical Perspectives is to develop or support the President’s policies, and more detailed tables may not be included if they are not considered necessary for that purpose. However, Analytical Perspectives also serves to provide other significant data that place the President’s Budget in context and assist the public and policymakers in better understanding the budget proposals. For example, Analytical Perspectives includes a chapter on aid to state and local governments that presents the President’s budget proposals for grant programs along with crosscutting information on federal grants to state and local governments, including government-wide grant spending, by agency and program. Analytical Perspectives also presents a summary of fee proposals but does not provide comparable crosscutting information about current fees. For fines and penalties, neither proposals nor crosscutting information is presented by agency. Until OMB makes more disaggregated data on fees, fines, and penalties maintained in its OMB MAX database—such as collections by agency—publicly available, Congress has limited information on such collections to inform oversight and decision-making.

OMB Does Not Disclose Limitations or Regularly Review Its Designation of Fees

Analytical Perspectives’ government-wide totals of fees may include inaccurately labeled collections—other collections that are not fees—and may exclude some fee collections. Data that are clear and accessible are presented with known limitations, as shown in table 1. OMB Circular No. A-11 states that all accounts in which more than half of collections are from fees will be designated as containing fees. OMB staff said that the entire account is designated as containing fees because account-level data are the most disaggregated data OMB collects from agencies. OMB calculates its government-wide total for fees by adding collections in all accounts designated in OMB MAX as containing user fees. However, agency accounts can include multiple sources of budget authority. For example, Treasury’s U.S. Mint’s account “United States Mint Public Enterprise Fund” includes offsetting collections from Mint operations and programs; these include the production and sale of commemorative coins and medals, the production and sale of circulating coinage, the protection of government assets, as well as gifts and bequests of property. The United States Mint Public Enterprise Fund is designated as containing fees in OMB MAX. Therefore, budget authority that is not derived from the collection of fees but is still included in this account will be designated as fees as well when calculating a government-wide total. Conversely, accounts in which fees contribute to less than half of collections are not designated as containing fees, and those fees will not be included in the government-wide total OMB calculates. OMB Circular No. A-11 describes the designation of fee accounts, but the data presented in Analytical Perspectives as totals for fees do not disclose OMB’s designation criteria, including the limitations on the accuracy of the data. OMB staff said they do not report this limitation because they consider OMB Circular No. A-11 a more appropriate document for providing technical information like the designation of accounts containing user fees. However, the section on fees in Analytical Perspectives does not direct the reader to OMB Circular No.
A-11 for key information related to the data presented on fees. For other topics, including lease-purchase agreements, Analytical Perspectives directs the reader to OMB Circular No. A-11 for further details. Furthermore, for other topics, OMB provided explanatory information along with the data in Analytical Perspectives. For example, OMB explained a recent change to definitions in the research and development section of Analytical Perspectives and the effect of the change on budget authority. Until OMB provides a description of data limitations regarding the criteria used to identify accounts with fees for compiling government-wide totals in Analytical Perspectives, or directs users to the relevant section of OMB Circular No. A-11, some users are likely to be unaware of the potential for the total user fees to be overestimated or underestimated. In addition, OMB does not regularly review and update implementation of its criteria for designating fees. Standards for Internal Control in the Federal Government state that agency management should use quality information to achieve the entity’s objectives, such as by processing data into quality information that is current and accurate. OMB Circular No. A-11 states that the fee designation is applied at the time the account is established. OMB staff told us that when establishing a new account, OMB collaborates with Treasury to determine the legal attributes of the account, including any fee authorities, and whether to designate the account as containing fees. OMB staff further explained they review the designation when new legislation is enacted that would change the attributes of the account, or if an agency informs OMB that the makeup of an account has changed because of programmatic changes. However, OMB Circular No. A-11 does not instruct agencies to regularly review or update this designation and report changes to OMB. Therefore, if the makeup of collections in an account changes so that fees go from being more than half of the collections to less than half, or vice versa, the account’s fee designation may not be updated accordingly. Until OMB instructs agencies to regularly review the fee designation in OMB MAX and update the designation, as needed, OMB cannot provide reasonable assurance that accounts are designated correctly and that the government-wide totals of fees reported in Analytical Perspectives are accurate.

OMB and Treasury Sources Do Not Completely Identify Fees, Fines, and Penalties

Users Cannot Disaggregate the Agency Account-Level Data to Specific Fee, Fine, and Penalty Collections

While Analytical Perspectives reports government-wide data labeled as fees, fines, and penalties, the other three sources we reviewed—the Budget Appendix, the Public Budget Database, and the Combined Statement—report account-level information by agency. Users cannot further disaggregate the data presented to specific fee, fine, and penalty collections. For example, USDA’s Animal and Plant Health Inspection Service (APHIS) is funded in part by six fees: (1) Agricultural Quarantine Inspection (AQI) fee, (2) Phytosanitary Export Certification fee, (3) Veterinary Services Import Export fee, (4) Veterinary Diagnostics fee, (5) Reimbursable Overtime, and (6) Trust Funds and Reimbursable Funds. However, a user cannot identify collections from each of these APHIS fees in the Budget Appendix. The Budget Appendix specifically identifies AQI fee collections—$768 million in fiscal year 2017—because they are receipts deposited to a trust fund.
The other five fees are combined within the total for offsetting collections—$152 million (see figure 3). The Budget Appendix, the Public Budget Database, and the Combined Statement report data at the account level because the purposes of these reports are broader than fees, fines, and penalties, and OMB and Treasury instruct agencies to report data at that level. Treasury’s Financial Manual states that agencies post appropriations and spending authorizations by Congress to accounts established by Treasury. OMB’s Circular No. A-11 instructs agencies to report data at the budget account level in OMB MAX, which supports the data in the Budget Appendix and the Public Budget Database. Because OMB and Treasury do not collect data that can be disaggregated to the level of fee, fine, or penalty, the collections for specific fees, fines, and penalties within accounts are not identifiable within account totals.

OMB Data Sources Label Data More Broadly than Fees, Fines, and Penalties

Both the Budget Appendix and Public Budget Database label and present data within each account by collection type: offsetting collections, offsetting receipts, and governmental receipts. These collection types include fees, fines, and penalties, as well as other sources of collections, as shown in the text box below.

Budgetary Collections as Labeled by the Budget of the U.S. Government Include More than Fees, Fines, and Penalties

Offsetting Collections and Offsetting Receipts include user fees as well as reimbursements for damages, intragovernmental transactions, and voluntary gifts and donations to the government.

Governmental Receipts include collections that result from the government’s exercise of its sovereign power to tax or otherwise compel payment, and include taxes, compulsory user fees, regulatory fees, customs duties, court fines, certain license fees, and deposits of earnings by the Federal Reserve System.

As a result, the user cannot separate fees, fines, and penalties from other collections. For example, offsetting collections may include fees, reimbursements for damages, gifts or donations of money to the government, and intragovernmental transactions with other government accounts. Analytical Perspectives explains that amounts collected by government agencies are recorded in two ways that broadly affect the formulation of the government-wide budget, but may not provide detail on specific agency collections: (1) governmental receipts, which are compared to total outlays in calculating the surplus or deficit; and (2) offsetting collections or offsetting receipts, which are deducted from gross outlays to calculate net outlay figures. These collections are presented together for budgeting purposes, but cannot be separated to specific fees, fines, or penalties. Therefore, it is not clear what percentage of the reported collections are fees, fines, and penalties as opposed to other collections.

OMB Does Not Clearly Describe How the Public Budget Database Reports Certain Fee, Fine, and Penalty Collections

Treasury’s Combined Statement and OMB’s Public Budget Database do not separately identify offsetting collections, including collections of fees, fines, and penalties. Instead, the Combined Statement reports net outlays, which include any offsetting collections as deductions from outlays. Similarly, the Public Budget Database reports budget authority net of any offsetting collections. Treasury clearly describes this presentation of the data in the Combined Statement, but OMB does not in the Public Budget Database.
In the “Explanation of Transactions and Basis of Figures” section of the Combined Statement, Treasury explains that outlays are stated net of collections representing reimbursements as authorized by law, which include offsetting collections. With the description provided in the Combined Statement, the user can understand that fees, fines, and penalties that are offsetting collections are not identifiable in the data. OMB reports receipts and budget authority—which include collections from fees, fines, and penalties—in separate spreadsheets of the Public Budget Database. Similar to outlays reported in Treasury’s Combined Statement, the Budget Authority spreadsheet reports the net budget authority of accounts after agencies have credited offsetting collections from fees, fines, penalties, or other collections. For example, the National Park Service reported net budget authority of $2.425 billion for the Operation of the National Park System account in fiscal year 2017 in both the Budget Appendix and the Public Budget Database, both of which present data compiled in OMB MAX. The Budget Appendix presents additional information, reporting offsetting collections that are at least partially derived from fees of $35 million, and gross budget authority of $2.46 billion, as shown in figure 4. The Public Budget Database, on the other hand, does not identify the amount of offsetting collections in the account or gross budget authority. OMB does not describe this presentation of the data in the Public Budget Database User’s Guide. As shown in table 1, data that are clear and accessible are presented with descriptions of the data. The User’s Guide directs users who may not be familiar with federal budget concepts to Analytical Perspectives and OMB Circular No. A-11. However, OMB does not describe, either in the User’s Guide or in the Budget Authority spreadsheet of the Public Budget Database, that this source reports budget authority net of offsetting collections, such as collections of fees, fines, and penalties. OMB staff said they do not describe the presentation because it is explained in Analytical Perspectives. However, the Public Budget Database is available for download separate from Analytical Perspectives, and the User’s Guide specific to the Public Budget Database includes other information describing the data in the spreadsheets. Describing the presentation of the data in the User’s Guide would help ensure that users of the Public Budget Database can correctly interpret the information and not underestimate agencies’ fee, fine, or penalty collections.

Government-wide Sources Do Not Consistently Report Data that Would Facilitate Oversight

No source of government-wide data consistently reports data elements related to fees, fines, and penalties that could help inform congressional oversight of agencies and programs, such as the amount collected annually, account balances, and whether the collection is a fee, fine, or penalty. See figure 5 for the extent to which data elements are included in the Budget Appendix, Public Budget Database, and Combined Statement. See appendix I for more detailed information on the data elements that are useful for congressional oversight. To a limited extent, government-wide reports include some data elements useful for the purpose of congressional oversight of fees, fines, and penalties.
In some cases, the Budget Appendix includes information on the fund type receiving collections and the extent to which the collections from fees may be appropriated to the agency collecting the fee. The Budget Appendix, for example, reports that collections for the Agricultural Quarantine Inspection (AQI) fee are recorded under “Special and Trust Fund Receipts,” as shown previously in figure 3. The user can also identify the appropriation of collections from the AQI fee under “Program and Financing, Budgetary resources,” as shown below in figure 6. As discussed previously, the other five fees the Animal and Plant Health Inspection Service (APHIS) collects are not individually identifiable in the Budget Appendix, but fall under offsetting collections. OMB and Treasury reports, and the systems that support them, are designed for budget and financial information and not for an inventory of fees, fines, and penalties that includes the data elements that Congress may use in oversight. OMB staff said the agency does not have a requirement to prioritize reporting fee, fine, and penalty data over more detailed information on other types of funds. OMB staff said that while they generally agree that additional data elements would be useful for oversight, there are trade-offs between transparency and the burden of collecting and reporting additional information.

Better Reporting of Government-wide Data on Fees, Fines, and Penalties Would Increase Transparency and Data Available for Oversight, but Would Require an Investment of Federal Resources

Benefits Include Increased Transparency and Better Information for Oversight and Decision-Making

According to OMB staff and officials from Treasury, the Congressional Research Service, and external organizations with expertise in federal budget issues and data transparency, there are two primary benefits to government-wide reporting of fee, fine, and penalty data: increased transparency and better information for congressional oversight and decision-making. Generally, all congressional staff we spoke with said that making additional government-wide data on fees, fines, and penalties, such as the data elements described previously, available without additional outreach to agencies would be useful and would increase transparency. While some congressional staff said such data elements are available through direct outreach to agencies, other congressional staff told us they could not always obtain the information they wanted. For example, staff from a congressional committee said that one of the most critical data elements for the purpose of congressional oversight is information on agency reporting of obligations and expenditures because, in their view, currently many agencies do not adequately report this information and some agencies do not report this information at all. These data would provide Congress a more complete picture of individual agencies’ activities and any potential overlap or duplication in multiple agencies’ activities. Congressional staff also said having government-wide data on collections of fees could inform efforts that are crosscutting in nature. For example, APHIS and Customs and Border Protection jointly implement the AQI program to help prevent the introduction of harmful agricultural pests and diseases into the United States, and AQI fee collections are divided between the two agencies.
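To illustrate the kind of crosscutting analysis such government-wide reporting could support, the short sketch below totals standardized records by collection type and rolls a shared fee up across the agencies that collect it. The record layout, program labels, and dollar amounts are illustrative assumptions, not an existing OMB or Treasury schema or actual reported figures.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Collection:
    agency: str    # collecting agency
    program: str   # program or account label
    name: str      # specific fee, fine, or penalty
    kind: str      # "fee", "fine", or "penalty"
    amount: float  # amount collected, in dollars

# Hypothetical records; the split of AQI collections between the two
# collecting agencies and the EPA figure are illustrative, not actual data.
records = [
    Collection("USDA/APHIS", "AQI", "Agricultural Quarantine Inspection fee", "fee", 500_000_000),
    Collection("DHS/CBP", "AQI", "Agricultural Quarantine Inspection fee", "fee", 268_000_000),
    Collection("EPA", "Enforcement", "Clean Air Act civil penalty", "penalty", 45_000_000),
]

# Government-wide totals by collection type.
totals_by_kind = defaultdict(float)
for r in records:
    totals_by_kind[r.kind] += r.amount

# Roll up a crosscutting program (here, AQI) across the agencies that share it.
aqi_by_agency = defaultdict(float)
for r in records:
    if r.program == "AQI":
        aqi_by_agency[r.agency] += r.amount

print(dict(totals_by_kind))   # totals by fee/fine/penalty
print(dict(aqi_by_agency))    # one fee, split across two agencies
```

The same grouping logic, applied to a standardized and disaggregated data source, is what would let a user aggregate by agency, program, or collection type on demand.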
Publicly available data on government-wide collections of fines and penalties could inform the public on agency enforcement activities and the compliance of regulated parties, such as those related to health or safety. Some officials from external organizations and congressional staff said that it would be useful to have government-wide data on individual fines and penalties levied by agencies. For example, the Environmental Protection Agency publishes an online database on its compliance and enforcement actions, Enforcement and Compliance History Online (ECHO). According to the website, the data available on ECHO allow the public to monitor environmental compliance in communities, corporations to monitor compliance across facilities they own, and investors to more easily factor environmental performance into decisions. Further, an official from an external organization with expertise in data transparency stated that, ideally, a user would be able to link fine and penalty data to spending data on USAspending.gov to increase transparency in instances where an organization receiving a federal grant or contract has also had a fine or penalty levied against it. Last, publicly available government-wide data on collections could inform the public, specifically payers of fees, fines, and penalties, and facilitate their participation in public comment opportunities. For example, OMB staff said government-wide data could provide the public with clear, transparent information across agencies on fee collections and allow the public to analyze differences in fee programs among agencies. Payers of fees may be able to make more informed comments on proposed changes to a fee program if they had information on how it relates to other fee programs across the federal government. Government-wide fee, fine, and penalty data would provide more information to facilitate congressional oversight. These data could help Congress identify trends in collections and significant changes that could be an indication of an agency’s performance. For example, staff of a congressional committee stated that fine and penalty data can be used to examine enforcement actions on a particular issue or to identify potential trends over time as an indicator of stronger or weaker enforcement actions by an agency. Congress could also use these data to identify variations in enforcement action among geographic regions or as an indicator of the frequency of violations. Additionally, data on review and reporting requirements can inform congressional oversight of fees, fines, and penalties. We previously reported that regular comprehensive reviews of fees provide opportunities for agencies and Congress to address problems in a fee’s design that, if left unaddressed, could contribute to inefficient use of government resources. For example, fee reviews could help ensure that fees are properly set to cover the total costs of those activities which are intended to be fully fee-funded. Fee reviews may also allow agencies and Congress to identify where similar activities are funded differently; for example, one by fees and one by appropriations. One such example is the export control system, in which the State Department charges fees for the export of items on the U.S. Munitions List, while the Commerce Department does not charge fees for those items exported under its jurisdiction. Government-wide reporting of fee, fine, and penalty data could also inform Congress’s funding decisions by providing a clearer picture of agencies’ total resources.
Congressional staff stated that knowing the statutory authority to collect and obligate funding from fees, fines, and penalties—along with any appropriation an agency may have received from an annual appropriation act, which is currently available to congressional staff—would provide a more complete picture of an agency’s total annual funding, including the portion attributed to the taxpayer and the portion attributed to payers of specific fees, fines, and penalties. For example, staff from congressional committees we spoke with said it would be useful to have data to show programs that receive appropriations from both offsetting collections and appropriations not derived from offsetting collections to inform decisions on how the program is funded. Congressional staff also said this would provide more opportunities to track the flow of money in and out of the government. Overall funding decisions may be affected if an agency has an increase in fee collections, for example. Congressional committee staff also said it would be useful to have government-wide data on specific fees, fines, and penalties that are offsetting collections because these collections are available for obligation without going through the annual appropriations process. Our prior work has shown that it is important to consider how the agencies and entities with this authority facilitate oversight to ensure effective management, transparency, and public accountability. Some committee staff said they can request data directly from agencies when they need more disaggregated information on fees, fines, and penalties, and reported different levels of responsiveness from agencies. Publicly available data could reduce potentially overlapping or duplicative requests from staff to agencies.

Potential Challenges Exist for Standardizing Definitions of Fees, Fines, and Penalties

According to officials from agencies and external organizations, there are potential challenges to defining government-wide data standards or definitions of fee, fine, and penalty programs by which agencies could report. Because there is no statutory requirement for government-wide reporting of fee, fine, and penalty data, agencies collect and use these data for their own purposes and are not using government-wide data elements and standards that are consistent and comparable between agencies. First, an agency may define a fee program as a single fee or a set of related fees. For example, the U.S. Citizenship and Immigration Services charges more than 40 immigration and naturalization fees to applicants and petitioners that could be grouped together as related fees or split into up to 40 different fee programs. Second, officials from external organizations said there are also challenges in defining, within data standards, the level of detail to report. For example, an official from an external organization said that, for large financial penalties, it may be useful for oversight for the data to identify each instance of the penalty, including the fined party. However, that level of detail could raise privacy sensitivities. For example, reporting every individual who paid an entrance fee at a national park could present privacy concerns. Finally, for elements that are useful for congressional oversight, one challenge could be the timing of when funds are collected compared to when they are available for obligation. The amount of funds collected in a year does not necessarily equal the amount available to the agency that year.
For example, collections of Harbor Maintenance Fees are deposited to the Harbor Maintenance Trust Fund and are not available for obligation without appropriation. Funds collected in one year may not necessarily be appropriated and obligated until a subsequent year. Our prior work on implementation of the Digital Accountability and Transparency Act of 2014 (DATA Act) underscores the importance of standardized and clearly defined data elements. We found inconsistent and potentially confusing instructions from OMB regarding the Primary Place of Performance data elements that resulted in inconsistent reporting among agencies. The standard established by OMB and Treasury defines Primary Place of Performance as “where the predominant performance of the award will be accomplished,” while other instructions define it as “the location of the principal plant or place of business where the items will be produced, supplied from stock, or where the service will be performed.” We found some agencies used the first definition and some used the second. In one case, the Departments of Labor and Health and Human Services issued contracts to the same company for similar office printers, but one reported the primary place of performance as California, the location of the office where the printers were delivered and used. The other agency reported the primary place of performance as New Jersey, the location of the company that supplied the printers. As a result, the data were not comparable between agencies or across the federal government, limiting their usefulness for congressional oversight. We previously recommended that OMB and Treasury provide additional instruction to agencies on how to report Primary Place of Performance to ensure the definitions are clear and the data standards are implemented consistently by agencies. Staff from one congressional committee cautioned that attempts to present information on budget authorities for fees, fines, and penalties in a simple and accessible database would create an unacceptable risk of confusion and legislative error. The staff said an accurate description of the nature of the spending—including whether there is authority to obligate without further appropriation—would be labor intensive and require significant legal analysis and research.

Government-wide Reporting Would Require an Investment of Federal Resources

Government-wide reporting of fees, fines, and penalties could increase transparency and facilitate oversight and decision-making, but would require time and resources to develop given that there is currently no government-wide system or requirement for agencies to collect and report detailed fee, fine, and penalty data. The level of federal investment would vary depending on factors such as the number of data elements included and the level of detail reported. Developing a comprehensive and accessible data source would provide greater benefits, but would likely be resource intensive. We have reported on other federal transparency efforts that could provide strategies for reporting government-wide fee, fine, and penalty data. For example, to create a clear and accessible government-wide data source that includes the data elements we identified that would be useful for congressional oversight, Treasury officials said the process would be similar to the implementation of the DATA Act for spending data.
To implement the DATA Act, OMB and Treasury led an intensive effort from May 2014 through May 2017, when the first government-wide data were reported under the DATA Act’s new standards.

Data Standards: OMB, in coordination with Treasury, established 57 standardized data element definitions and approximately 400 associated sub-elements for reporting federal spending information. OMB and Treasury created opportunities for non-federal stakeholders to provide input into the development of data standards, including publishing a Federal Register notice seeking public comment on the establishment of financial data standards; presenting periodic updates on the status of DATA Act implementation to federal and non-federal stakeholders at meetings and conferences; soliciting public comment on data standards using an online collaboration space; and collaborating with federal agencies on the development of data standards and the technical schema through MAX.gov, an OMB-supported website.

Technical Process for Reporting: Treasury developed the initial DATA Act Information Model Schema, which provided information on how to standardize the way financial assistance awards, contracts, and other financial and nonfinancial data would be collected and reported under the DATA Act.

System to Collect and Validate Data: Treasury developed a system that collects and validates agency data (the DATA Act Broker), which operationalizes the reporting framework laid out in the schema. In addition, Treasury employed online software development tools to provide responses to stakeholder questions and comments related to the development and revision of the broker.

Public Reporting: Treasury created and updated the new USAspending.gov website to display certified agency data submitted under the DATA Act.

Agencies also took steps to prepare to report spending data. They reviewed data elements OMB identified, participated in standardizing the definitions, performed an inventory of their existing data and associated business processes, and updated their systems and processes to report data to Treasury. OMB and Treasury issued policy directions to help agencies meet their reporting requirements under the act. They also conducted a series of meetings with participating agencies to obtain information on any challenges that could impede effective implementation and assess agencies’ readiness to report required spending data. Although the steps to developing comprehensive, detailed reporting on government-wide collections of fees, fines, and penalties might be similar to the DATA Act efforts, the dollar amounts of collections would be smaller than those of federal spending. In fiscal year 2017, federal spending was $3.98 trillion compared to about $350 billion in collections of fees, fines, penalties, and forfeitures reported by OMB. On the other hand, defining data elements and standards for fee, fine, and penalty data could be more resource intensive than developing data standards for DATA Act implementation because the DATA Act built on earlier reporting requirements. The DATA Act amended the Federal Funding Accountability and Transparency Act of 2006 (FFATA), which required OMB to establish the website USAspending.gov to report data on federal awards, including contracts, grants, and loans. The DATA Act required OMB and Treasury to standardize data required to be reported by FFATA. For fee, fine, and penalty data, OMB and Treasury would be starting without the benefit of some data elements already defined.
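To make the comparison with DATA Act implementation more concrete, the sketch below mimics in miniature the kind of broker-style validation Treasury built for spending data, applied instead to fee, fine, and penalty records. The element names, types, and rules are hypothetical assumptions for illustration; they are not the DATA Act schema or any existing OMB or Treasury data standard.

```python
# A minimal sketch of broker-style validation for hypothetical fee, fine,
# and penalty records. Element names and rules below are assumptions.

REQUIRED_ELEMENTS = {
    "agency": str,           # reporting agency
    "collection_name": str,  # specific fee, fine, or penalty
    "collection_type": str,  # "fee", "fine", or "penalty"
    "fiscal_year": int,
    "amount_collected": float,
}

VALID_TYPES = {"fee", "fine", "penalty"}

def validate(record: dict) -> list[str]:
    """Return a list of validation errors; an empty list means the record passes."""
    errors = []
    for element, expected in REQUIRED_ELEMENTS.items():
        if element not in record:
            errors.append(f"missing element: {element}")
        elif not isinstance(record[element], expected):
            errors.append(f"{element}: expected {expected.__name__}")
    if record.get("collection_type") not in VALID_TYPES:
        errors.append("collection_type must be 'fee', 'fine', or 'penalty'")
    if isinstance(record.get("amount_collected"), float) and record["amount_collected"] < 0:
        errors.append("amount_collected must be nonnegative")
    return errors

# Example: an incomplete submission is rejected with specific, actionable errors.
print(validate({"agency": "EPA", "collection_type": "penalty", "fiscal_year": 2017}))
```

Most of the cost in a real effort would lie not in code like this but in defining the elements themselves, which is why the absence of previously defined elements matters.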
Further, we have previously reported that effective implementation of provisions to make federal data publicly available, including the DATA Act and GPRAMA’s program inventory, especially the ability to crosswalk spending data to individual programs, could provide vital information to assist federal decision makers in addressing significant challenges the government faces. Incorporating a small number of data elements that Congress identifies as most useful for oversight into ongoing government-wide agency reporting efforts could incrementally improve transparency and information for oversight and decision-making, with fewer resources. For example, Congress required agencies to add selected data elements on civil monetary penalties to their annual financial reports. Specifically, the Federal Civil Penalties Inflation Adjustment Act Improvements Act of 2015 requires agencies to include information about the civil monetary penalties within the agencies’ jurisdiction, including catch-up inflation adjustment of the civil monetary penalty amounts, in annual agency financial reports or performance and accountability reports. As shown in figure 7, to facilitate agencies’ reporting, OMB provided a table to define the data elements required in the act in its annual instructions, OMB Circular No. A-136, Financial Reporting Requirements. Agencies started reporting these data in their agency financial reports in fiscal year 2016. In July 2018, we reported that 40 of 45 required agencies reported information on civil monetary penalties in their fiscal year 2017 agency financial reports, as directed by the OMB instructions. Similarly, if Congress sought additional fine and penalty data elements, such as amounts collected and authority to spend collections, OMB could expand this table in Circular No. A-136 to include those data elements. Circular No. A-136 also outlines that agencies may include the results of biennial reviews of fees and other collections in their agency financial reports. OMB could also update this portion of the circular to require agencies to report specific data elements that are useful for oversight, such as review and reporting requirements. While this information reported in agency financial reports would be dispersed across portable document format, or PDF, documents, it would provide some transparency on agencies’ activities that Congress could use to prioritize its oversight efforts. In another example, if OMB implements the federal program inventory as required by GPRAMA, it could include a data element on whether a program has a fee, fine, or penalty. We previously reported that the principles and practices of information architecture—a discipline focused on organizing and structuring information—offer an approach for developing such an inventory to support a variety of uses, including increased transparency for federal programs. A program inventory creates the potential to aggregate, disaggregate, sort, and filter information across multiple program facets. For example, from a user’s perspective, a program could be tagged to highlight whether it includes activities to collect fees, fines, or penalties. Then, a user interested in this data facet could select a tag (e.g., fees) that could generate a list of programs that also have fees, fines, or penalties.
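A minimal sketch of that tagging-and-filtering idea, using a hypothetical inventory, follows; the program names, agencies, and tags are illustrative assumptions, not entries from an actual federal program inventory.

```python
# Hypothetical program inventory records, each carrying a set of facet tags.
inventory = [
    {"program": "National Park Operations", "agency": "Interior/NPS", "tags": {"fees"}},
    {"program": "Agricultural Quarantine Inspection", "agency": "USDA/APHIS", "tags": {"fees"}},
    {"program": "Clean Air Enforcement", "agency": "EPA", "tags": {"fines", "penalties"}},
    {"program": "Weather Forecasting", "agency": "Commerce/NOAA", "tags": set()},
]

def programs_with(tag: str) -> list[str]:
    """Filter the inventory to programs carrying a given facet tag."""
    return [p["program"] for p in inventory if tag in p["tags"]]

# A user selecting the "fees" tag would see only fee-funded programs.
print(programs_with("fees"))
# ['National Park Operations', 'Agricultural Quarantine Inspection']
```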
While the program inventory is broader than agency collections of fees, fines, and penalties and would include programmatic descriptions, it would increase transparency by enabling Congress and the public to identify and isolate all programs that include, as a source of funding or a key data element, a fee, fine, or penalty to inform oversight and target additional requests for information to agencies.

Conclusions

Federal agencies are authorized to collect hundreds of billions of dollars from fees, fines, and penalties each year that fund a wide variety of programs, but Congress and the American public do not have government-wide data on these collections that would provide increased transparency and facilitate oversight. OMB’s MAX database contains some disaggregated data labeled as fees, fines, and penalties, but OMB does not make these data publicly available. Without more disaggregated, government-wide, accessible data on collections of fees, fines, and penalties, such as by agency, Congress and the public do not have a complete and accurate picture of federal finances, the sources of federal funds, and the resources available to fund federal programs. In addition, improving the data OMB currently reports related to fees, fines, and penalties could help the user better understand the data and the potential limitations. First, until OMB describes how it identifies accounts with fees, including that the government-wide totals of fees it reports in Analytical Perspectives may include collections that are not fees and exclude some fee collections, some users will likely be unaware that reported totals could be over- or under-estimates. Second, without OMB instruction to agencies to regularly review and update implementation of the criteria for designating accounts that contain fees, accounts could be designated incorrectly if the makeup of the collections changes. Therefore, OMB cannot provide reasonable assurance that the total amount of fees it reports is accurate. Third, until OMB describes in the User’s Guide that its Public Budget Database reports budget authority net of offsetting collections, including collections of fees, fines, and penalties, users could misinterpret the information and underestimate collections in some cases. OMB and Treasury do not collect many of the data elements on fees, fines, and penalties that would be useful for congressional oversight, such as review and reporting requirements. There are trade-offs between the potential costs and the potential benefits. While reporting government-wide data on specific fees, fines, and penalties would improve transparency and information for decision-making, more data elements would require greater investment of resources from OMB, Treasury, and agencies. Any new reporting of fee, fine, and penalty data would be most useful if it is designed to be compatible with other transparency efforts—the DATA Act reporting and the federal program inventory. Regardless of the approach taken, linkage of data on fees, fines, and penalties with other government-wide data reporting, such as USAspending.gov, would enhance transparency and facilitate congressional oversight.

Recommendations for Executive Action

We are making the following four recommendations to OMB:

The Director of OMB should make available more disaggregated data on fees, fines, and penalties that it maintains in its OMB MAX database. For example, OMB could report data on fee collections by agency in Analytical Perspectives.
(Recommendation 1)

The Director of OMB should present, in Analytical Perspectives, the data limitations related to the government-wide fee totals by describing the 50-percent criteria OMB uses to identify accounts with fees or by directing users to the relevant sections of OMB Circular No. A-11. (Recommendation 2)

The Director of OMB should instruct agencies to regularly review the application of the user fee designation in the OMB MAX data and update the designation, as needed, to meet the criteria in OMB Circular No. A-11. (Recommendation 3)

The Director of OMB should describe in the Public Budget Database User’s Guide that budget authority is reported net of any offsetting collections, such as collections of fees, fines, and penalties. (Recommendation 4)

Agency Comments

We provided a draft of this report to Treasury and OMB for review and comment on December 10, 2018. Treasury informed us that they had no comments. As of March 4, 2019, OMB had not provided comments. We are sending copies of this report to the appropriate congressional committees, the Secretary of the Treasury, and the Director of the Office of Management and Budget. In addition, the report is available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6806 or nguyentt@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II.

Appendix I: Objectives, Scope, and Methodology

This report examines: (1) the extent to which government-wide data on collections of fees, fines, and penalties are publicly available and useful for the purpose of congressional oversight, and (2) the benefits and challenges to government-wide reporting of specific fees, fines, and penalties, including data elements that facilitate congressional oversight. To assess the extent and usefulness of publicly available data, we developed criteria for the availability and usefulness for the purpose of congressional oversight of data on collections of fees, fines, and penalties reported in government-wide sources (see table 3). The first three criteria—clear and accessible presentation, complete, and accurate—address the availability of the data, and the final criterion, useful for the purpose of congressional oversight, addresses content of the data specific to congressional oversight needs. These criteria are based on: Standards for Internal Control in the Federal Government; related Digital Accountability and Transparency Act of 2014 (DATA Act) government-wide instruction from the Office of Management and Budget (OMB) on public access to data and open government; our prior work on user fees, fines, and penalties; and input from staff of congressional committees on appropriations, budget, and oversight. Using a standard list of semistructured interview questions, we interviewed congressional staff who were available to meet with us on or before November 1, 2018. We shared the criteria with OMB staff and Department of the Treasury (Treasury) officials, and they agreed the criteria are relevant and reasonable.
To identify publicly available government-wide sources of data with information on collections of fees, fines, and penalties, we reviewed our prior work on user fees, fines, penalties, and permanent funding authorities; conducted general background research, including reviewing Congressional Budget Office (CBO) and Congressional Research Service (CRS) reports; and interviewed staff from OMB and officials from Treasury, CBO, and CRS. We identified the Budget of the U.S. Government—including Analytical Perspectives, the Budget Appendix, and the Public Budget Database—produced annually by OMB; the Financial Report of the U.S. Government (Financial Report), the Daily Treasury Statement, the Monthly Treasury Statement, the Combined Statement of Receipts, Outlays, and Balances, and USAspending.gov produced by Treasury; and CBO products, such as its budget projections and historical budget tables, as containing government-wide federal budget or financial data. Of the sources we identified, we included Analytical Perspectives, the Budget Appendix, the Public Budget Database, and the Combined Statement of Receipts, Outlays, and Balances in our study because they contain government-wide information on collections of fees, fines, and penalties. We excluded Treasury’s Daily Treasury Statement, Monthly Treasury Statement, Financial Report, and USAspending.gov from this review because we determined that the information presented did not differentiate between types of collections in a way that would allow us to separately identify fees, fines, and penalties. For example, Treasury’s Financial Report reports government-wide information in categories that are broader than fees, fines, and penalties. Specifically, it reports “earned revenue,” which includes collections of interest payments for federal loan programs. Such collections are not fees. The Financial Report also reports fines and penalties combined with interest and other revenues. We also reviewed and excluded CBO products because the data reported are not designed to differentiate between types of collections. We assessed Analytical Perspectives, the Budget Appendix, the Public Budget Database, and the Combined Statement of Receipts, Outlays, and Balances using the criteria we developed for clear and accessible presentation, accurate, and complete. We also assessed the Budget Appendix, the Public Budget Database, and the Combined Statement of Receipts, Outlays, and Balances using the criteria for useful for the purpose of congressional oversight. Further, we assessed relevant portions of OMB and Treasury instructions using Standards for Internal Control in the Federal Government. We also used OMB and Treasury data to identify and report government-wide totals for fees, fines, and penalties to the extent that they were reported. To assess the reliability of OMB’s MAX database data related to the collections of fees, fines, and penalties, we reviewed related documentation, interviewed knowledgeable agency officials, and conducted electronic data testing. To assess Treasury’s Bureau of the Fiscal Service data related to the collections of fees, fines, and penalties, we reviewed related documentation and interviewed knowledgeable agency officials. In both cases, we found the data to be reliable for our purposes. We did not examine whether agencies accurately report collections as fees, fines, and penalties to OMB and Treasury.
In addition, we identified and reviewed other sources of data on fees, fines, and penalties that are specific to federal agencies, including annual financial reports and agency websites. We did not apply the criteria we developed for availability and usefulness for the purpose of congressional oversight to these sources because they contain data for an individual agency rather than government-wide data. To determine the benefits and challenges to government-wide reporting of fees, fines, and penalties, we interviewed staff of congressional committees on appropriations, budget, and oversight; OMB staff and Treasury officials; staff of CBO; and representatives of external organizations, including the Committee for a Responsible Federal Budget, the Data Coalition, the Data Foundation, the Project on Government Oversight, the Peter G. Peterson Foundation, and the Sunlight Foundation, about the potential benefits and challenges of government-wide reporting of fees, fines, and penalties. In addition, we reviewed our prior work on the DATA Act, federal program inventories, and federal fees to identify and assess issues to consider in government-wide reporting. We conducted this performance audit from November 2017 to March 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: GAO Contact and Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the contact named above, Susan E. Murphy (Assistant Director), Barbara Lancaster (Analyst in Charge), Michael Bechetti, Jacqueline Chapin, Colleen Corcoran, Ann Marie Cortez, Lorraine Ettaro, John Mingus, and Rachel Stoiko made key contributions to this report.
Why GAO Did This Study
Congress has authorized federal agencies to collect hundreds of billions of dollars annually in fees, fines, and penalties. These collections can fund a variety of programs, including programs related to national security and the protection of natural resources. Data on collections are important for congressional oversight and to provide transparency in agencies' use of federal resources. GAO was asked to review the availability of government-wide data on fees, fines, and penalties. This report examines (1) the extent to which data on collections of fees, fines, and penalties are publicly available and useful for the purpose of congressional oversight; and (2) the benefits and challenges to government-wide reporting of fees, fines, and penalties. GAO assessed government-wide fee, fine, and penalty data against criteria for availability and usefulness based on multiple sources, including prior GAO work and input from staff of selected congressional committees. GAO interviewed OMB staff, Treasury officials, and representatives of organizations with expertise in federal budget issues and reviewed prior GAO work to identify benefits and challenges of reporting these data.

What GAO Found
There are no comprehensive, government-wide data at the level of detail that identifies specific fees, fines, or penalties. The Office of Management and Budget (OMB) and the Department of the Treasury (Treasury) report data that include these collections at the budget account level, which generally covers a set of agency activities or programs. OMB and Treasury also report some summary data for budgeting and financial management purposes. In the Budget of the U.S. Government, for example, OMB data showed government-wide fees totaled just over $335 billion in fiscal year 2017. These reports, however, are not designed to inventory or analyze fee, fine, or penalty collections and have significant limitations for that purpose. Although OMB collects more disaggregated data on fees, fines, and penalties, it does not make the data publicly available. OMB uses the disaggregated data in its OMB MAX database—such as the agency and account—to compile reported totals, such as the government-wide fees total in the Budget of the U.S. Government. Until OMB makes more disaggregated data publicly available, Congress has limited information on collections by agency to inform oversight and decision-making. OMB's government-wide total of fees includes collections that are not fees and excludes some fee collections. The total includes all collections for accounts in which fees make up at least half of the account's collections and excludes all others. OMB does not direct agencies to regularly review and update the accounts included in the total. Therefore, if accounts' makeups change such that fee collections drop below, or rise above, the 50 percent threshold, accounts may have incorrect fee designations and the total may be inaccurate. Further, OMB does not disclose the limitation that the total may exclude some fees and include other collections that are not fees. As a result, some users of the data are likely unaware of the potential for the total fees to be overestimated or underestimated. Further, no source of government-wide data consistently reports data elements on fees, fines, and penalties that could help inform congressional oversight. Generally, congressional staff told us that additional data, such as amounts of specific penalties, would increase transparency and facilitate oversight.
These data could help Congress identify trends in collections and significant changes that could be an indication of an agency's performance. While reporting government-wide fee, fine, and penalty data provides benefits, there are trade-offs in terms of the time and federal resources it would take to develop and implement a process for agencies to report these data. The level of federal investment would vary depending on factors such as the number of data elements included and the level of detail reported. Developing a comprehensive and accessible data source would provide greater benefits, but would likely be resource-intensive. Alternatively, incorporating a small number of data elements that Congress identifies as most useful for oversight into ongoing government-wide reporting efforts could incrementally improve transparency and information for oversight and decision-making, with fewer resources.

What GAO Recommends
GAO is making four recommendations to enhance OMB reporting on fees, fines, and penalties, including making disaggregated data publicly available, updating instructions to federal agencies to review accounts designated as containing fees, and disclosing limitations in reported data. OMB did not provide comments.
Background
FSA seeks to ensure that all eligible individuals enrolled in postsecondary education can benefit from federal financial aid for education. It is responsible for implementing and managing programs authorized under the Higher Education Act of 1965, as amended. Specifically, Title IV of the act authorizes the federal student assistance programs for which FSA is responsible. These programs (Title IV programs) provide loans, grants, and work-study funds to students attending college or career school. In fulfilling its program obligations, FSA is responsible for managing and overseeing almost $1.4 trillion in outstanding loans. In administering Title IV programs, FSA performs a variety of functions across the student aid life cycle. These include educating students and families about the process of obtaining aid; processing millions of student aid applications; disbursing billions of dollars in aid; enforcing financial aid rules and regulations; servicing millions of student loans and helping borrowers avoid default; securing repayment from borrowers who have defaulted on loans; partnering with schools, lenders, and guaranty agencies to prevent fraud, waste, and abuse; and insuring billions of dollars in guaranteed student loans previously issued by financial institutions. In carrying out these functions, FSA collects, maintains, and shares a large amount of information, including sensitive personal information from students and their families. The office also relies on various automated systems to assist with student aid functions. Further, FSA works with various entities, such as loan servicers, guaranty agencies, private collection agencies, and lenders, to carry out loan servicing and collection activities.

Federal Student Financial Aid Programs
The three main categories of federal student financial aid are loans, grants, and federal work-study. Loans are student aid funds that are borrowed to help pay for eligible education programs and must be repaid with interest. FSA administers loans under the William D. Ford Direct Loan Program (Direct Loan) and the Federal Family Education Loan (FFEL) Program, along with other programs, such as Perkins Loans, for students demonstrating financial need. Direct Loans are loans for which the Department of Education is the lender. They include subsidized loans made to undergraduate students based on financial need, for which the government does not generally charge interest while the student is in grace or deferment status; unsubsidized loans made to undergraduate and graduate students, for which the borrower is fully responsible for paying interest regardless of loan status; PLUS loans made to graduate or professional students and parents of dependent undergraduate students, for which the borrower is fully responsible for paying the interest regardless of the loan status; and consolidation loans, which allow the borrower to combine existing federal student loans into a single new loan. FFEL loans are loans that were obtained through private lenders, with federal subsidies ensuring that private lenders earned a certain yield on the loans they made. Under this program, the Department of Education entered into agreements with guaranty agencies to insure the private lenders against losses due to a borrower's default. Federal law ended the origination of these loans as of July 1, 2010; however, FSA, lenders, and guaranty agencies continue to service (i.e., handle billing and other activities related to loan repayment) and collect outstanding FFEL loans.
According to FSA, borrowers' eligibility is the same under both the Direct Loan and FFEL programs. The department also administers student aid through grants, such as Pell grants, which are student aid funds that generally do not have to be repaid. It also administers the federal work-study program, which provides part-time jobs for students with financial need, allowing them to earn money to help pay educational expenses. In fiscal year 2017, FSA reported disbursing about $122.5 billion in aid to students through its various programs. In addition, the portfolio of outstanding FFEL loans totaled approximately $305.8 billion, as of September 30, 2017. Table 1 provides details on the amounts of financial aid disbursed to students in fiscal year 2017 across all financial aid programs.

Overview of the Financial Aid Process
The federal financial aid process is complex and consists of four phases: school eligibility determination, student application and eligibility determination, disbursement of funds, and repayment and collection of loans. Each phase of the process is supported by automated FSA information systems that collect and process student aid information. The information is then used by FSA, schools, and other stakeholders to determine the type and amount of aid a student is eligible to receive, and to support the distribution and repayment of loans. See figure 1 for an overview of the four phases.

Federal Requirements for Protecting Information and Systems
Federal laws and guidance specify requirements for protecting federal systems and data. This includes systems used or operated by a contractor or other organization on behalf of a federal agency. FISMA is intended to provide a comprehensive framework for ensuring the effectiveness of security controls over information resources that support federal operations and assets, as well as the effective oversight of information security risks. The act requires each agency to develop, document, and implement an agency-wide information security program to provide risk-based protections for the information and information systems that support the operations and assets of the agency, including those provided or managed by another entity. The primary laws that provide privacy protections for personal information accessed or held by the federal government are the Privacy Act of 1974 and the E-Government Act of 2002. These laws describe, among other things, agency responsibilities with regard to protecting PII. The Privacy Act places limitations on agencies' collection, disclosure, and use of personal information maintained in systems of records. It requires, among other things, that agencies issue system of records notices to notify the public when the agencies establish or make changes to a system of records. System of records notices are to identify, among other things, the types of data collected, the types of individuals about whom information is collected, the intended "routine" uses of the data, and procedures that individuals can use to review and correct personal information. In addition, the E-Government Act of 2002 requires agencies to conduct assessments of the impact on privacy from using information systems to collect, process, and maintain PII. A privacy impact assessment is an analysis of how personal information is collected, stored, shared, and managed in a federal system. In accordance with FISMA, OMB is responsible for the oversight of agencies' information security policies and practices.
OMB establishes requirements for federal information security programs and assigns agency responsibilities to fulfill the requirements of statutes such as FISMA. OMB requires agencies to oversee the implementation of security and privacy controls by contractors and other non-federal entities that collect, use, process, store, maintain, and disseminate federal information on behalf of a federal agency. OMB notes that agencies are ultimately responsible for ensuring that federal information is adequately protected, commensurate with the risk resulting from the unauthorized access, use, disclosure, modification, or destruction of such information. Accordingly, OMB guidance states that, when sharing PII with contractors or other non-federal entities, agencies should establish requirements for the protection of their data in written agreements with these entities. For specific technical direction, OMB requires agencies to implement standards and guidelines established by NIST. FISMA also assigns certain responsibilities to NIST, including to develop standards and guidelines for systems other than national security systems. These standards and guidelines include (1) standards for categorizing agency information and systems to provide appropriate levels of information security, according to a range of risk levels; (2) guidelines for the types of information and systems to be included in each category; and (3) minimum information security requirements for information and systems in each category. Accordingly, NIST has developed a series of information security standards and guidelines for agencies to follow in managing information security risk. NIST guidance provides steps that agencies can take to identify appropriate security and privacy controls and establish specific requirements for implementing those controls to ensure consistency both internally and externally to the agency. NIST guidance also outlines requirements for protecting the confidentiality of controlled unclassified information (which includes PII) when it resides in a non-federal system or organization. Relevant publications include the following:

Federal Information Processing Standard 199, Standards for Security Categorization of Federal Information and Information Systems, requires agencies to categorize their information systems as low-impact, moderate-impact, or high-impact for the security objectives of confidentiality, integrity, and availability. The potential impact values assigned to the respective security objectives are the highest values from among the security categories that the agency identifies for each type of information residing on those information systems.

NIST Special Publication 800-53, Security and Privacy Controls for Federal Information Systems and Organizations, provides a catalog of security and privacy controls for federal information systems and organizations. It also provides a process for selecting controls to protect organizational operations, assets, individuals, other organizations, and the nation from a diverse set of threats. These threats include hostile cyber attacks, natural disasters, structural failures, and human errors. The guidance includes privacy controls to be used in conjunction with the specified security controls to achieve comprehensive security and privacy protection. According to NIST, the privacy controls are based on the Fair Information Practice Principles embodied in the Privacy Act of 1974, the E-Government Act of 2002, and OMB policies.
NIST Special Publication 800-37, Guide for Applying the Risk Management Framework to Federal Information Systems: A Security Life Cycle Approach, explains how to apply a risk management framework to federal information systems, including security categorization, security control selection and implementation, security control assessment, information system authorization, and security control monitoring.

NIST Special Publication 800-171, Protecting Controlled Unclassified Information in Nonfederal Systems and Organizations, provides federal agencies with recommended security guidance for protecting the confidentiality of controlled unclassified information when it resides in a non-federal system and organization.

The Framework for Improving Critical Infrastructure Cybersecurity serves as a baseline for protecting critical information assets. It is intended to help organizations apply the principles and best practices of risk management to improve the security and resilience of critical infrastructure. The framework outlines a risk-based approach to managing cybersecurity that is composed of three major parts: a framework core, profile, and implementation tiers.

Subsequent to the issuance of the NIST cybersecurity framework, a May 2017 executive order required agencies to use the framework to manage cybersecurity risks. It also outlined actions to enhance cybersecurity across federal agencies and critical infrastructure to improve the nation's cyber posture and capabilities against cybersecurity threats to digital and physical security. In addition, the Gramm-Leach-Bliley Act requires financial institutions—companies that offer consumers financial products or services like loans, financial or investment advice, or insurance—to explain their information-sharing practices to their customers and to safeguard sensitive data. As part of its implementation of the act, the Federal Trade Commission (FTC) issued the Safeguards Rule, which requires financial institutions under FTC's jurisdiction to have measures in place to keep customer information secure. Specifically, the rule requires financial institutions to develop a documented information security program that describes the administrative, technical, or physical safeguards used to protect customer information. The program must be appropriate to the company's size and complexity, the nature and scope of its activities, and the sensitivity of the customer information it handles. As part of its program, each company must designate one or more employees to coordinate its information security program; identify and assess the risks to customer information in each relevant area of the company's operation, and evaluate the effectiveness of the current safeguards for controlling these risks; design and implement information safeguards to control risks and regularly monitor and test their effectiveness; select service providers that can maintain appropriate safeguards, require them to maintain safeguards, and oversee their handling of customer information; and evaluate and adjust the program in light of relevant circumstances, including changes in the firm's business or operations, or the results of security testing and monitoring.

GAO Previously Highlighted the Need to Improve Policies and Procedures for the Protection of Student Aid Data
We recently reported on aspects of FSA's protection of student aid data, noting that weaknesses existed in key processes.
Specifically, in November 2017, we reported, among other things, that FSA needed to improve its policies and procedures for the management and protection of student aid data. For example, while the agency had established policies and procedures for key privacy requirements, such as publishing notices to describe how personal information is to be maintained, used, and accessed, it did not always ensure that privacy impact assessments for its information systems included an analysis of privacy risks and mitigation steps. In addition, we reported that FSA's information security policies and procedures were not always up to date. Further, we noted that the agency needed to strengthen its oversight of schools' implementation of federal information security requirements to help ensure student aid information was adequately protected. We recommended that the Secretary of Education take seven actions to strengthen FSA's management and protection of federal student aid records and enhance its oversight of schools. For example, we recommended that the agency incorporate information security program requirements in its reviews of postsecondary schools, and that the Department of Education update its regulation to include protections of personal information as an element of a school's ability to demonstrate its administrative capability. FSA concurred or generally concurred with five of our seven recommendations, partially concurred with one recommendation, and did not concur with another.

Non-School Partners Play Key Roles in the Federal Student Aid Process and Have Access to Large Amounts of Personally Identifiable Information to Facilitate Their Activities
FSA's non-school partners play key roles in the federal student financial aid program, particularly with regard to the servicing, repayment, and collection of student loans. These partners include FFEL lenders, Title IV loan servicers, guaranty agencies, and private collection agencies. FSA shares a variety of PII with the non-school partners to assist them in carrying out their functions.

FSA's Non-School Partners Perform Key Roles Related to Loan Servicing, Repayment, and Collection
Non-school partners are involved primarily in the loan servicing, repayment, and collection phases of the federal student aid process.

FFEL lenders: During the administration of the FFEL program, these lenders were involved primarily in the disbursement of funds. As part of the program, students and parents obtained federal loans through non-federal lenders, such as the borrower's school, a bank, credit union, or other lending institution. Generally, lenders provided the loan proceeds to a student's school, which then credited the student's account and disbursed the residual amount, if any, to the student. After a loan was disbursed, lenders chose to either service the loan, contract with an outside organization for servicing, or sell the loan. According to FSA, the majority of lenders have third-party servicers that perform servicing, billing, and reporting on their behalf. The lenders also work closely with guaranty agencies, which insure FFEL loans in case of default, and oversee certain aspects of the lenders' activities. As of June 2018, there were 1,079 lenders participating in the FFEL program. Although FSA purchased a portion of the FFEL loans as a result of disruptions in financial markets during the financial crisis of 2007 and 2008, the majority of the FFEL portfolio continues to be owned and serviced by private lenders.
These lenders are required to report quarterly on their portfolios and are to sign participation agreements with FSA requiring that electronic data submitted by the lenders be accurate and conform to applicable laws, regulations, and policies. FSA also noted that lenders are regulated by a variety of entities, such as the FTC, Federal Deposit Insurance Corporation, Federal Reserve, Department of the Treasury, and, in some cases, state agencies.

Title IV loan servicers: These organizations are primarily involved in the repayment and collection phase of the aid process. Under the Direct Loan program, after the loan is disbursed, the Department of Education contracts with loan servicers to perform a variety of administrative functions. Loan servicers are responsible for collecting payments on a loan, advising borrowers on resources and benefits to better manage their federal student loan obligations, responding to customer service inquiries, and performing other administrative tasks associated with maintaining a loan on behalf of the Department of Education. In addition, once a Direct Loan becomes delinquent (i.e., the first day after a borrower fails to make a scheduled monthly payment), loan servicers may take several actions pending the loan entering default, such as reaching out to past-due borrowers and entering into repayment arrangements for loans. As of July 2018, FSA contracted with 11 loan servicers. The contracts between FSA and the servicers establish the servicers' responsibilities in the aid process. The contracts lay out requirements for servicers with regard to financial reporting, internal controls, accounting, and other areas.

Guaranty agencies: These agencies are state or private non-profit entities that are primarily involved in the repayment and collection phase of the aid process. As part of the FFEL program, they receive federal funds to play the lead role in administering aspects of the program. These agencies' functions include insuring private lenders against losses due to a borrower's default or other losses (the guaranty agencies are, in turn, reinsured by the federal government); providing assistance in preventing delinquent borrowers from going into default; working with defaulted student and parent borrowers to rehabilitate their defaulted loans, restore their credit, and provide them with a fresh start; and reporting actions to credit bureaus. Prior to July 2010, when the origination of FFEL loans stopped, guaranty agencies also were involved in verifying student eligibility for loans and notifying lenders, who would send a promissory note to borrowers for their signature and disburse the funds. According to FSA, guaranty agencies continue to work closely with holders of FFEL loans, including supporting them in default aversion activities and overseeing aspects of their operations through monitoring, auditing, and ensuring compliance with regulations. As of July 2018, 24 guaranty agencies were administering FFEL loans. FSA uses participation agreements to govern the agencies' responsibilities in the aid process. The agreements lay out reporting requirements, records retention periods, and other requirements. For example, guaranty agencies are required to report to the Department of Education on the loans they insure. They are also required to keep records and have them available for inspection by the federal government.

Private collection agencies: Private collection agencies are also primarily involved in the repayment and collection phase of the aid process.
If borrowers default on their loans after entering the repayment phase, private collection agencies will attempt to enter into voluntary repayment agreements, while ensuring that defaulted borrowers are aware of both the consequences of their failure to repay and the options available to help them get out of default. Other debt resolution functions performed by private collection agencies include determining whether a borrower's account is eligible for administrative resolutions, such as discharge due to death or total and permanent disability; determining whether a borrower's account is eligible for involuntary payment methods such as administrative wage garnishment; preparing accounts for litigation; and returning accounts to FSA for failure to convert the account to active repayment status. As of July 2018, FSA had contracts with 18 private collection agencies. These contracts describe the private collection agencies' responsibilities in the aid process.

FSA Shares Extensive Amounts of Personally Identifiable Information about Borrowers with Non-School Partners
In administering the federal student aid program, FSA shares a large amount of PII that it collects from students and parents with its non-school partners. This is particularly significant in that FSA directly manages or oversees more than 203 million student loans made to approximately 43 million borrowers. PII collected when students or their parents apply for financial aid includes, but is not limited to, the following:

Student demographics: Name, address, Social Security number, telephone numbers, email address, marital status, driver's license number, etc.

Student eligibility: Citizenship status, dependency status, high school completion status, selective service registration (if applicable), and whether the student has a drug conviction, among other information.

Student finances: Tax return filing status; adjusted gross income; cash, savings, and checking account balances; untaxed income; and current net worth of student's assets.

Parent demographics (if applicable): Name, Social Security number, email address, and marital status.

Parent finances: Tax return filing status, adjusted gross income, tax exemptions, and asset information.

After the borrower's eligibility is determined or the funds are disbursed, the PII that the agency collected as part of the process is stored on several of FSA's internal IT systems. FSA shares the PII stored on its systems with its non-school partners to assist them in carrying out their respective functions. This sharing occurs when the agency grants non-school partners access to specific systems. According to FSA, the data that non-school partners have access to depends on the non-school partner's relationship with the individual holding the loan. Table 2 provides a description of the FSA systems from which non-school partners receive student aid data, as well as the types of PII they contain. To gain access to FSA systems and data, non-school partners must submit an application to use FSA's Student Aid Internet Gateway (SAIG). The SAIG application enables the enrolling organization (i.e., the non-school partner) to select services to receive, submit, view, and/or update student financial aid data online, or receive or send information by batch exchange.
To gain access to services allowing them to receive, submit, view, and update student aid data, each non-school partner must designate a Primary Data Point Administrator, who is responsible for determining which staff within the non-school partner's organization are to be given access to FSA's systems and data. The Primary Data Point Administrator is also responsible for ensuring the privacy of the information obtained or provided via the SAIG. According to FSA officials, enrollment for access to borrower data via the SAIG varies based on the type of non-school partner and the functions it performs. Further, the officials stated that non-school partners can only access information about the borrowers with whom they are directly involved. The services that non-school partners can access via the SAIG include the following:

Central Processing System data: Processed data from the Free Application for Federal Student Aid are reported to institutions on the Institutional Student Information Record, and corrections to data can be made.

Common Origination and Disbursement System data: Origination, disbursement, and other required reporting information for the Direct Loan program can be exchanged electronically between FSA and non-school partners.

National Student Loan Data System: Title IV, enrollment history information, and federal grant information can be viewed and updated by non-school partners.

Financial Management System: Financial reporting information can be sent by non-school partners to FSA.

FSA's Oversight of Non-School Partners' Protection of Student Aid Data Is Inconsistent
As noted previously, OMB and NIST guidance calls for agencies to oversee third-party entities with which they share PII to ensure that appropriate security and privacy controls are in place. This guidance identifies key practices for overseeing the protection of data by such entities. These practices include the following, among others:

Require the implementation of risk-based security and privacy controls: NIST guidance states that agencies should categorize their information and systems based on their risk impact level and require the implementation of security controls that include one of three baseline sets of controls that correspond to the impact level, tailored to the system and organization as appropriate.

Independently assess the implementation of security controls: Security control assessments determine the extent to which controls are implemented correctly, operating as intended, and producing the desired outcome. For external entities that store or process federal information, NIST guidance states that agencies can verify that controls have been implemented through independent, third-party assessments or attestations.

Develop and implement corrective actions: As part of the process for conducting security control assessments, organizations should develop remedial actions to address identified weaknesses and track them to closure.

Monitor the implementation of controls on an ongoing basis: Ongoing monitoring includes ensuring that technical, management, and operational security controls are tested at an organization-defined frequency and results are provided to officials on an ongoing basis. NIST guidance notes that agencies should monitor security control compliance by external entities on an ongoing basis. This can be achieved through reporting the security status of the system and security controls on an ongoing basis.
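The first key practice turns on the categorization step described earlier: under FIPS 199, a system's impact level for each security objective is the highest value among the information types it processes, and that level drives the choice of NIST SP 800-53 control baseline. A minimal Python sketch of this "high-water mark" logic follows; the information types and impact ratings shown are illustrative assumptions, not FSA's actual categorizations.

# Impact levels, ordered for comparison.
LEVELS = {"low": 1, "moderate": 2, "high": 3}

def high_water_mark(ratings):
    """Return the highest impact level among the given ratings."""
    return max(ratings, key=lambda r: LEVELS[r])

def categorize(info_types):
    """Compute the system's per-objective impact levels from the ratings
    of each information type it processes (FIPS 199 high-water mark)."""
    return {
        objective: high_water_mark([ratings[objective] for ratings in info_types.values()])
        for objective in ("confidentiality", "integrity", "availability")
    }

# Hypothetical example: a partner system holding borrower PII alongside public program data.
system_info = {
    "borrower_pii":        {"confidentiality": "moderate", "integrity": "moderate", "availability": "low"},
    "public_program_data": {"confidentiality": "low",      "integrity": "low",      "availability": "low"},
}

per_objective = categorize(system_info)
overall = high_water_mark(list(per_objective.values()))
print(per_objective)  # confidentiality and integrity are moderate; availability is low
print("Tailor the NIST SP 800-53 %s-impact control baseline for this system." % overall)

In this example the system lands at the moderate-impact baseline, which is consistent with the Department of Education policy, discussed below, that systems containing PII be categorized as moderate impact at a minimum.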
FSA has established policies and procedures for overseeing its non-school partners' protection of the PII that it shares with the partners. These policies and procedures vary in the extent to which they address the key practices for overseeing the protection of PII. For example, FSA's policies and procedures for Title IV loan servicers and private collection agencies fully address three of the four key practices. For guaranty agencies, FSA's procedures require onsite assessments but do not require monitoring controls on an ongoing basis. Finally, for FFEL lenders, FSA has minimal oversight procedures.

FSA Established Security Requirements for Loan Servicers and Private Collection Agencies
FSA established policies and procedures for overseeing Title IV loan servicers and private collection agencies that generally address the selected key practices for overseeing the protection of data. Specifically, by applying its standard contractor oversight processes, the agency has addressed three of the four key practices that pertain to loan servicers and private collection agencies. FSA partially addressed one practice related to ensuring that the implementation and effectiveness of all controls is monitored on an ongoing basis. Table 3 summarizes the extent to which FSA's processes address the key practices for loan servicers and private collection agencies.

FSA required loan servicers and private collection agencies to implement risk-based security and privacy controls: FSA established security requirements and guidance for loan servicers and private collection agencies. These requirements are communicated through provisions in the contracts that FSA has with the loan servicers and private collection agencies. Specifically, FSA requires loan servicers and private collection agencies to implement security controls in accordance with NIST's Security and Privacy Controls for Federal Information Systems and Organizations. The contracts also require loan servicers and private collection agencies to adhere to applicable Department of Education and FSA security policies and procedures. For example, the Department of Education's policy for security system categorization, which applies to contractor-owned systems (such as those owned by loan servicers and private collection agencies), requires that systems containing PII be categorized as, at a minimum, "moderate impact." This categorization reflects an assessment of the risks associated with a compromise of the information and determines the selection of appropriate security controls for the information system. In addition, FSA developed a standard operating procedure for implementing security requirements based on this determination, which applies to loan servicers and private collection agencies. This process for categorizing systems and selecting and implementing controls is based on NIST's risk management framework, including steps for selecting, implementing, and assessing controls, and authorizing the information system to operate.

FSA required independent assessments of the implementation of security controls: To help ensure that loan servicers and private collection agencies meet minimum security standards, FSA developed procedures for assessing the implementation of security controls based on applicable federal guidance. Specifically, FSA's security authorizations process includes procedures for an independent assessor to review security controls implemented on the loan servicers' and private collection agencies' systems.
This includes, among other things, developing a test plan; executing the plan, to include observing security controls; running automated scans; and collecting artifacts and evidence. The independent assessor then is to document the issues, findings, and recommendations for remediation. According to FSA’s procedures, once the assessment of the loan servicer’s or private collection agency’s system is completed, issues have been identified, and a plan of action and milestones (POA&M) has been developed, an FSA authorizing official is to review key documentation and make a decision on whether to authorize the system to operate. This decision is to be based on a determination as to whether the residual risk to agency operations, agency assets, resources, or individuals resulting from the operation of the system is acceptable. Once approved, the authorization to operate the system is valid for 3 years, provided that the conditions, if any, specified in the POA&M are met. FSA established a process for developing and implementing corrective actions: FSA requires loan servicers and private collection agencies to follow a standard operating procedure for documenting and implementing corrective actions to address weaknesses identified during security assessments. This procedure requires the owners of the systems to work with their agencies’ information system security officers and FSA’s internal independent validation and verification teams to document deficiencies and remediation plans in the FSA’s POA&M management tool, review and document evidence to close deficiencies, and provide monthly updates on the status of POA&Ms, along with reasons for any overdue items. FSA officials added that they are reviewing ways to further automate the process for flagging overdue items. In addition, the procedure specifies time frames for system owners to remediate weaknesses based on their criticality. To confirm that a weakness has been addressed, the procedure requires FSA’s independent validation and verification team to review submitted plans and evidence and determine if they are sufficient to close the deficiency. FSA did not fully establish a process for monitoring all controls on an ongoing basis: To monitor security controls between the independent assessments supporting the authorization to operate process, FSA’s contracts with loan servicers require the servicers to have a continuous monitoring program, as defined by NIST SP 800-37. Similarly, FSA’s contracts with private collection agencies require these agencies to enroll their systems in FSA’s Continuous Security Authorization program, which is intended to oversee and monitor the security controls in FSA’s information systems on an ongoing basis. In addition, the contracts require the private collection agencies to ensure that independent testing and monitoring of system security controls is performed on an ongoing basis. The contracts require these tests to cover a subset of the system security controls quarterly so that all controls are tested at least once during a 3-year period. However, according to FSA Technology Office officials, neither loan servicers nor private collection agencies have been enrolled in FSA’s Continuous Security Authorization program, as required. The officials added that they had not established a time frame to incorporate loan servicers and private collection agencies into the agency’s continuous monitoring program. 
According to the officials, both loan servicers and private collection agencies rely on their own continuous monitoring programs to oversee their systems; however, only the private collection agencies report the results of their monitoring activities to FSA (on a quarterly basis). In addition, FSA does not specify which controls the loan servicers and private collection agencies are to test; rather, it leaves this determination to the non-school partners. FSA policy also requires that loan servicers and private collection agencies respond to an annual self-assessment questionnaire concerning their implementation of NIST security and privacy controls. According to the FSA officials, if deficiencies are noted in the agencies' responses, FSA works with the non-school partners to create POA&Ms and track remediation efforts through closure. Officials in FSA's Technology Office added that loan servicers participate in FSA's Web Application Surveillance Program, in which FSA conducts vulnerability scans of the servicers' systems and shares findings with the servicers for remediation on a monthly or quarterly basis, depending on the environment being tested. Nevertheless, while these processes can provide helpful information about the loan servicers' and private collection agencies' security posture on an ongoing basis, they do not ensure that all security controls implemented on these partners' systems are tested on a regular basis. For example, according to FSA policy, the Web Application Surveillance Program is intended to simulate the scanning and probing of a web application that might be useful to intruders. However, the program is not intended to ensure that management, operational, and technical controls have been implemented. Without fully establishing policies and procedures for ongoing monitoring of security controls implemented by loan servicers and private collection agencies, FSA has less assurance that these controls are effectively implemented and operating as intended. Further, FSA has a limited ability to ensure that risks associated with these non-school partners' use of PII have been adequately mitigated.

FSA Established Security Requirements for Guaranty Agencies, but Lacks a Process for Ongoing Monitoring of Controls
FSA policies and procedures require guaranty agencies to implement security and privacy controls to protect student aid data, and the agency has recently enhanced its processes to include independent, on-site assessments of those controls and the development of corrective actions for identified weaknesses. However, it lacks processes for monitoring guaranty agencies' implementation of controls on an ongoing basis. Table 4 summarizes the extent to which FSA's processes address the four key practices for overseeing the protection of data by guaranty agencies.

FSA did not fully specify a required baseline of risk-based security and privacy controls for guaranty agencies: FSA requires, through written agreements, that guaranty agencies participating in the federal student aid program comply with federal security requirements. Specifically, these agreements include an amendment that requires the guaranty agencies to ensure that any information systems that include PII about borrowers implement security and privacy controls specified in NIST guidance. In addition, when applying for access to FSA systems and information through the SAIG, guaranty agencies agree to protect the privacy of all information that has been provided by the Department of Education.
In particular, guaranty agencies are required to affirm that administrative, operational, and technical security controls are in place and operating as intended. FSA provides guidance to guaranty agencies on implementing security controls, in the form of a template to be used in completing an annual self-assessment (discussed in more detail below). This template identifies security and privacy controls to be used in the self-assessment, based on the NIST control baseline for moderate-impact systems. The guaranty agencies are expected to inform FSA as to whether they have implemented these controls. However, the agreements FSA has established with guaranty agencies do not specify that information must be maintained at a specific impact level or that guaranty agencies are to implement a particular baseline set of security controls that correspond to an agency-established, risk-based impact level. As noted previously, once agencies determine the impact level of their information or systems, they should select one of three baselines of security controls (low, moderate, or high) that correspond to the impact level. This baseline can then be tailored based on risk and the specific organizational and system environment. According to FSA officials, the agreements allow the guaranty agencies to determine whether their systems are low, moderate, or high impact. The officials also added that guidance provided to guaranty agencies, such as self-assessment questionnaires, is based on the NIST 800-53 moderate baseline. However, allowing guaranty agencies to determine the specific designation could result in inconsistent implementation of security controls if guaranty agencies choose varying impact levels for their systems. OMB guidance states that agencies should require third parties with whom PII is shared to maintain security at a specified impact level. By not specifying in written agreements the impact level of the information it shares with guaranty agencies, and a corresponding set of minimum security requirements, FSA jeopardizes its ability to ensure that the PII it shares with guaranty agencies will be adequately and consistently protected.

FSA established a process for on-site assessment of guaranty agencies' security and privacy controls: Prior to fiscal year 2018, FSA relied on a self-assessment process, wherein guaranty agencies completed annual questionnaires about their implementation of security and privacy controls. The completed questionnaires were reviewed by FSA staff, who then met with guaranty agency staff over the telephone to discuss any identified weaknesses. As part of this process, FSA staff did not collect or review documentation to independently verify whether controls had been appropriately implemented, or conduct on-site reviews to obtain first-hand evidence of the implementation of the controls. However, according to FSA officials, they also conducted targeted, on-site visits to selected guaranty agencies in 2016 and 2017 to verify security control implementation. FSA has recently enhanced its process for assessing guaranty agencies' implementation of security and privacy controls. FSA officials stated that, in March 2018, they began a series of on-site assessments of guaranty agencies, which are to be completed by the end of September 2018. FSA provided the guaranty agencies with a security plan template that outlines roles and responsibilities, methodology, controls to be tested, and the test plan approach for these assessments.
In addition, the list of evidence includes required artifacts to demonstrate compliance with NIST requirements. FSA officials stated that they plan to alternate between on-site assessments and self-assessments each year. By enhancing its approach to assessing guaranty agencies’ implementation of security requirements, FSA should be better positioned to ensure that the data shared with these entities are being adequately protected. FSA processes include monitoring of guaranty agency corrective actions: As part of the guaranty agency self-assessment process, FSA established procedures for documenting weaknesses identified during the self-assessments and corrective action plans for addressing the weaknesses. FSA Deputy Chief Information Officer officials stated that they track the corrective action plans in a system that provides weekly status reports that include notifications of overdue corrective actions. The officials added that all actions to correct weaknesses identified during the self-assessments were to be taken within 12 months of identifying the corrective actions. In April 2018, FSA officials stated that they intended to follow a procedure similar to the one used for the self-assessments to document and monitor corrective actions for weaknesses identified during the on-site assessments of guaranty agencies’ security and privacy controls. Specifically, the officials noted that all findings of weaknesses during the on-site assessments are to be turned into POA&Ms, assigned an expected completion date, and tracked to completion by FSA. This procedure, if effectively implemented, should help FSA ensure that gaps in security controls are remediated in a timely manner. FSA did not establish a process for monitoring all guaranty agency controls on an ongoing basis: To monitor guaranty agencies’ compliance between assessments, FSA officials stated that they hold weekly teleconferences with officials from guaranty agencies during which they discuss new security requirements or other issues. FSA Information Technology officials stated that they follow up with guaranty agencies after these calls to ensure that they implement new requirements. In addition, FSA issued guidance to guaranty agencies in January 2018 on conducting vulnerability scans of these agencies’ systems. This guidance addresses vulnerability testing guidelines and scanning requirements, as well as guidance on security control testing. However, FSA does not monitor all security controls by requiring guaranty agencies to report regularly on the status of security controls between on- site assessments. Neither the weekly teleconferences nor the vulnerability scans include testing the implementation of all security and privacy controls on a defined, periodic basis or reporting results to FSA. FSA officials stated that they rely on the on-site and self-assessments to oversee guaranty agencies’ security control implementation because FSA does not have a contractual relationship with guaranty agencies and does not own the guaranty agencies’ systems. However, OMB and NIST note that agencies have a responsibility for ensuring that their information is protected at a consistent level even when such information is shared with non-federal partners. Without fully establishing procedures for ongoing monitoring of guaranty agencies, FSA cannot fully ensure that risks to the student aid data containing PII that it shares with guaranty agencies have been adequately mitigated. 
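Ongoing monitoring of the kind at issue here is commonly operationalized as a rotating test schedule, similar to the quarterly rotation FSA's contracts require of private collection agencies: the control catalog is partitioned across the assessment cycle so that every control is tested at least once during the 3-year authorization period. The Python sketch below illustrates such a schedule; the control identifiers and the flat-list representation of the catalog are illustrative assumptions.

from math import ceil

def quarterly_schedule(controls, quarters=12):
    """Partition the control catalog across the 12 quarters of a 3-year
    cycle so that every control is tested at least once per cycle."""
    per_quarter = ceil(len(controls) / quarters)
    return {q + 1: controls[q * per_quarter:(q + 1) * per_quarter]
            for q in range(quarters)}

# Hypothetical subset of NIST SP 800-53 control identifiers.
catalog = ["AC-%d" % i for i in range(1, 25)] + ["AU-%d" % i for i in range(1, 13)]

schedule = quarterly_schedule(catalog)
# Coverage property: the union of all quarterly subsets is the full catalog.
assert sorted(c for subset in schedule.values() for c in subset) == sorted(catalog)
print(schedule[1])  # controls due for testing in the first quarter

A schedule with this coverage property, combined with regular reporting of results, is what distinguishes ongoing monitoring of all controls from the targeted scans and teleconferences described above.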
FSA Exercises Minimal Oversight of FFEL Lenders' Protection of Student Aid Data
FSA established high-level requirements for FFEL lenders to protect student aid data, but it exercises minimal oversight to ensure implementation of security and privacy protections for these data. Table 5 summarizes the extent to which FSA's processes for overseeing lenders address the key practices for overseeing the protection of data.

FSA did not fully specify risk-based security and privacy controls for FFEL lenders: Like other non-school partners, lenders must complete FSA's SAIG application when applying for access to FSA data and systems. The SAIG application outlines general requirements for ensuring the security and privacy of the data that FSA shares with the lenders. In addition, FFEL lenders enter into participation agreements with FSA, which include requirements related to data exchange, such as ensuring that data lenders share with FSA are correct. Also, FSA officials told us that security requirements are communicated to the lenders' staff via "dear colleague" letters and the security notices that appear when users log on to the agency's Access and Identity Management System to access PII and other data. However, neither the SAIG application nor the participation agreement requires the FFEL lenders to implement a baseline set of risk-based security and privacy controls based on the impact level of the affected information and systems. FSA Information Technology and Business Operations officials said that they plan to add security and privacy requirements to the FFEL lender participation agreements as part of their next update during the 2018 revision cycle, but they did not specify what requirements would be included in these revised agreements. Until FSA establishes specific requirements for lenders' protection of data, it will lack assurance that information it shares is being protected in a manner consistent with FSA's determination of its sensitivity.

FSA did not require independent assessments of FFEL lenders' implementation of controls: FSA does not have policies or procedures for independently assessing lenders' implementation of protections for student aid data. The SAIG application does not require an independent assessment of the non-school partners' information security and privacy controls to determine the extent to which the controls are implemented correctly, operating as intended, or producing the desired outcome with respect to security. According to FSA officials, by accepting the terms of use displayed when logging on to FSA systems, users agree to comply with security and privacy requirements. The officials added that FSA monitors activity on the National Student Loan Data System and can remove a user's access if a case of improper usage is identified. However, FSA's procedures for monitoring system usage do not include an independent assessment of lenders' implementation of security controls. Further, while FFEL lenders may be required to undergo various compliance audits and program reviews, FSA has not determined the extent to which these audits or reviews address security and privacy protections; it also does not review the results of such reviews to gain assurance that security and privacy protections are in place. Without requiring evidence of such assessments, FSA does not have a basis for ensuring that lenders are implementing adequate security and privacy protections.
FSA has not established a process for overseeing corrective actions taken by FFEL lenders: Since FSA does not require independent assessments of lenders' information security controls, it does not have a process for identifying weaknesses in the FFEL lenders' security and privacy controls and monitoring corrective actions. Lenders do not notify FSA of security or privacy weaknesses that may be identified in their systems, nor do they report on corrective actions taken to remedy such weaknesses. In the absence of such reporting, FSA cannot ensure that weaknesses in the security and privacy controls of the lenders' systems are being addressed.

FSA did not establish procedures for monitoring FFEL lenders' implementation of controls on an ongoing basis: FSA does not have a process for ongoing monitoring of lenders' implementation of security or privacy safeguards. FSA does not require lenders to provide periodic reports to FSA on their security and privacy posture or to conduct any reviews of their implementation of security and privacy controls. Without requiring evidence that lenders are effectively implementing security and privacy protections, FSA cannot ensure that the data accessed by lenders are being safeguarded commensurate with risk. Regarding the lack of FFEL lender oversight, FSA officials noted that lenders, as financial institutions, are subject to a number of other legal and regulatory requirements that were not defined by FSA as part of the FFEL program. For example, lenders are subject to requirements for protecting customer information imposed by the Gramm-Leach-Bliley Act and FTC's Safeguards Rule, which calls for financial institutions to document an information security program that includes specific elements. However, FSA does not have a process for ensuring that lenders are complying with these, or other, requirements related to the protection of student aid data. Consequently, FSA lacks assurance that risk-based safeguards commensurate with the sensitivity of these data are being effectively implemented, tested, and monitored. In our previous work, we similarly found that FSA did not have assurance that schools, which are also required to comply with the FTC Safeguards Rule, were implementing these requirements. OMB noted that agencies are ultimately responsible for ensuring that their information is adequately protected, and NIST stated that this responsibility does not change when information is shared with non-federal partners. Accordingly, agencies should have assurance that information they share with non-federal entities is being protected at an appropriate level. In the case of FSA, this could include leveraging processes already in place, such as the FTC Safeguards Rule, to gain assurance that appropriate security and privacy controls are in place and are being regularly monitored and tested. Without establishing a process for gaining such assurance, FSA is not meeting its responsibility to ensure that borrowers' data are being adequately protected.

Conclusions
FSA shares PII on millions of people with non-school partners (i.e., loan servicers, private collection agencies, guaranty agencies, and FFEL lenders) so that they can carry out key aspects of the federal student aid program. FSA is responsible for ensuring that its non-school partners protect this information by implementing adequate information security and privacy safeguards.
While FSA has taken steps to oversee the security and privacy protections of some of its non-school partners, its policies and procedures did not always include all key oversight practices. In particular, while FSA established requirements for loan servicers and private collection agencies, along with processes for ensuring their implementation that generally adhered to the key practices, the agency had not ensured that controls are tested and results are reported on an ongoing basis. FSA, therefore, may lack visibility into the effectiveness of the protections applied to student aid data. With respect to guaranty agencies, FSA established security and privacy requirements and has taken steps to enhance security assessments. Nevertheless, without ensuring that controls are monitored on an ongoing basis, it lacks adequate assurance that security controls required by FSA are in place and effective. Further, because it exercised minimal oversight over FFEL lenders, FSA has limited assurance that they are protecting student aid data consistent with the agency’s requirements. FSA’s limited oversight could result in inconsistent or ineffective implementation of security controls, which in turn could have serious consequences for the privacy of millions of borrowers whose information is shared with non-school partners. Recommendations for Executive Action We are making the following six recommendations to the Department of Education: The Secretary of Education should enroll loan servicers in FSA’s continuous monitoring program and, in the interim, require these entities to report the results of security controls testing at an FSA-defined frequency. (Recommendation 1) The Secretary of Education should enroll private collection agencies in FSA’s continuous monitoring program, and, in the interim, require these entities to test all controls at an FSA-defined frequency and regularly report the results. (Recommendation 2) The Secretary of Education should modify FSA’s agreements with guaranty agencies to specify a required baseline of security controls based on the impact level of the information shared with these agencies, as determined by FSA. (Recommendation 3) The Secretary of Education should establish a process for continuous monitoring of guaranty agencies’ implementation of security and privacy requirements between on-site assessments, to include testing all controls at an FSA-defined frequency and regularly reporting results. (Recommendation 4) The Secretary of Education should include specific security and privacy requirements in agreements with FFEL lenders based on FSA’s categorization of the information shared with the lenders. (Recommendation 5) The Secretary of Education should develop policies and procedures to gain assurance that FFEL lenders have appropriate security and privacy controls in place and that these controls are being regularly tested and monitored. (Recommendation 6) Agency Comments and Our Evaluation We received written comments on a draft of this report from FSA. In its comments (reprinted in appendix II), FSA concurred with three of our recommendations, partially concurred with two recommendations, and did not concur with one. In addition, FSA provided technical comments, which we have incorporated as appropriate. FSA generally concurred with our first three recommendations and described various actions it planned or had under way to implement them.
Specifically, regarding our recommendation to enroll loan servicers in FSA’s continuous monitoring program (recommendation 1), the agency stated that loan servicers are scheduled to be enrolled in its ongoing security authorization program beginning in fiscal year 2019. Regarding our recommendation to enroll private collection agencies in FSA’s continuous monitoring program and, in the interim, require these entities to test all controls at an FSA-defined frequency and regularly report the results (recommendation 2), FSA stated that it concurred, although the actions it said it planned to take would not fully address the recommendation. Specifically, the agency stated that it intends to work with private collection agencies to identify specific relevant criteria to strengthen continuous monitoring testing schedules and include these criteria in private collection agencies’ quarterly reports to FSA. This measure, if implemented effectively, would address the interim measure called for in our recommendation. However, FSA did not describe actions to address the first part of our recommendation. Specifically, it did not state whether it intended to enroll private collection agencies in its ongoing security authorization program, as called for by its contracts with these agencies. Doing so would provide enhanced oversight of their implementation of security and privacy controls. The agency concurred with our recommendation to modify FSA’s agreements with guaranty agencies to specify a required baseline of security controls (recommendation 3). In this regard, FSA stated that the agreements it has established with guaranty agencies require them to comply with standards in NIST Special Publication 800-53, revision 4, and that assessments of the guaranty agencies require compliance with the moderate-impact level control baseline under the applicable NIST standards. Even though FSA did not describe plans to modify its agreements with guaranty agencies to explicitly require a specific baseline of controls, the procedures that it noted should help FSA ensure that guaranty agencies are protecting student aid data based on the office’s determination of risk. We intend to follow up with FSA to obtain and assess the evidence supporting its implementation of these recommendations. FSA stated that it partially concurred with two other recommendations. With respect to establishing a process for continuous monitoring of guaranty agencies’ implementation of security and privacy requirements between on-site assessments, to include testing all controls at an FSA-defined frequency and regularly reporting results (recommendation 4), FSA cited its process for on-site assessments or self-assessments as the means by which it monitors guaranty agencies. Specifically, it stated that it requires guaranty agencies to annually either complete a self-assessment or participate in an on-site assessment. However, FSA did not describe any additional steps it intends to take to monitor guaranty agencies’ implementation of security and privacy controls between assessments. As noted in the report, the self-assessment process that FSA established for guaranty agencies does not include such elements as collecting or reviewing documentation to verify that controls have been appropriately implemented. Further, FSA does not monitor all security controls between on-site assessments by requiring guaranty agencies to report regularly on the status of security controls.
Regular reporting on the status of security controls, such as test results, would provide FSA with additional assurance that guaranty agencies have implemented adequate protections. Thus, we believe our recommendation remains appropriate. FSA also stated that it partially concurred with our recommendation to include specific security and privacy requirements in agreements with FFEL lenders based on FSA’s categorization of the information shared with the lenders (recommendation 5). Specifically, FSA stated that it has revised its 2019-2020 Lender Organization Participation Agreement with FFEL lenders to include specific security and privacy responsibilities and requirements, which is to be effective at the beginning of fiscal year 2019. The planned actions that the agency described in its response should fully address our recommendation, if effectively implemented. We intend to follow up with FSA to obtain and assess the evidence supporting its implementation of this recommendation. FSA did not concur with our recommendation to develop policies and procedures to ensure that FFEL lenders have appropriate security and privacy controls in place and that these controls are being regularly tested and monitored (recommendation 6). According to the agency, it lacks statutory authority under the Higher Education Act to monitor FFEL lenders in this area. FSA noted that the lenders are already subject to security and privacy controls that are monitored and enforced through other legal authorities that are not administered by the Department of Education or FSA. However, we continue to believe that our recommendation should be implemented. We recognize that FSA may not have the authority to impose additional requirements related to monitoring the adequacy of security and privacy controls implemented by FFEL lenders. Furthermore, the recommendation does not require FSA or the Department of Education to exercise additional regulatory authority over FFEL lenders or to conduct testing or other assessments of the lenders’ security and privacy programs. Rather, it seeks for FSA to review the results of other compliance audits or program assessments, including, as appropriate, those conducted by other federal entities, to acquire visibility into the lenders’ implementation of information security and privacy safeguards. Leveraging such a process should help provide FSA with assurance that the student aid data it shares with them are being adequately protected. Accordingly, we have clarified our recommendation to better reflect its intent. We are sending copies of this report to the appropriate congressional committees, the Secretary of Education, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9342 or marinosn@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. 
Appendix I: Objectives, Scope, and Methodology The objectives of our review were to (1) describe the roles of the Office of Federal Student Aid’s (FSA) non-school partners in the federal student financial aid program, including the types of personally identifiable information (PII) shared with them; and (2) assess the extent to which FSA’s policies and procedures for overseeing non-school partners’ protection of federal student aid data align with federal requirements, federal guidance, and best practices. To address the first objective, we obtained and reviewed various documentation that described the federal student aid process and the types of information collected, used, and shared in the process. To determine the roles played by non-school partners in the federal student aid process, we reviewed reports from the Department of Education and FSA, including FSA’s annual reports for fiscal years 2016 and 2017, and reports from the department’s Office of Inspector General; reports from the Congressional Research Service on federal student aid programs; and prior GAO reports on aspects of federal student aid programs. These non-school partners included entities that FSA directly engages with to carry out key aspects of the student aid process. These partners were non-federal lenders participating in the Federal Family Education Loan program, Title IV loan servicers, guaranty agencies, and private collection agencies. Specifically, we identified key functions carried out by these partners, the types of agreements they had with FSA, and the numbers of each type of partner that FSA engages with. To determine the types of PII shared with non-school partners, we reviewed FSA documentation on key systems used to collect, store, and process information as part of the student aid process. This included high-level documentation and descriptions of FSA’s systems architecture, privacy impact assessments for FSA and non-school partner systems, and information on the process by which FSA enrolls non-school partners to share student aid data with the agency. We also reviewed previous GAO reports on FSA’s management of student aid data, including PII collected during the aid process. In addition, we interviewed FSA officials, including officials from the agency’s technology and business operations offices. To address the second objective, we reviewed and analyzed the policies, procedures, and processes FSA has in place for overseeing non-school partners’ protection of student aid data and compared them to federal requirements and guidance for ensuring the protection of PII. We identified key activities for overseeing the protection of PII by reviewing laws, including the Federal Information Security Modernization Act of 2014; Office of Management and Budget requirements and guidance on managing federal information; and National Institute of Standards and Technology information security standards and guidance. Based on our review of these requirements and guidance, we identified four key practices for establishing security and privacy requirements for non-federal entities and overseeing the implementation of these requirements. These practices are to (1) require the implementation of risk-based security and privacy controls, (2) independently assess the implementation of security controls, (3) develop and implement corrective actions, and (4) monitor the implementation of controls on an ongoing basis.
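The three-level rating scale applied to these four key practices, defined just below (met, partially met, not met), reduces to a simple rule over how many aspects of a practice are supported by evidence. The following is a minimal sketch of that rule; the aspect counts are hypothetical placeholders, not our actual assessment data.

def rate_practice(aspects_with_evidence, total_aspects):
    # "met" requires evidence addressing all aspects of the key practice;
    # evidence for some but not all aspects is "partially met"; none is "not met".
    if aspects_with_evidence == total_aspects:
        return "met"
    return "partially met" if aspects_with_evidence > 0 else "not met"

# Hypothetical counts for the four key practices identified above.
assessment = {
    "require risk-based security and privacy controls": (3, 3),
    "independently assess implementation of controls": (1, 3),
    "develop and implement corrective actions": (0, 2),
    "monitor controls on an ongoing basis": (0, 2),
}

for practice, (have, total) in assessment.items():
    print(f"{practice}: {rate_practice(have, total)}")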
We collected and reviewed evidence provided by FSA (policy and process documents, artifacts, written responses to questions, and verbal responses to questions) to understand its processes for overseeing the non-school partners’ protection of student aid data. We then compared the processes to the four key practices we identified. We determined whether the process met, partially met, or did not meet the key practices:
Met – the agency provided evidence of processes and procedures that address all aspects of the key practice.
Partially met – the agency provided evidence of processes and procedures that address some, but not all, aspects of the key practice.
Not met – the agency did not provide evidence of processes and procedures that addressed the key practice.
We supplemented our review with interviews of FSA Business Operations and Information Technology officials with knowledge of and responsibility for the oversight of non-school partners. We also reviewed relevant Department of Education inspector general reports. We conducted this performance audit from June 2017 to September 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: Comments from the Office of Federal Student Aid
Appendix III: GAO Contact and Staff Acknowledgments
In addition to the contact named above, John De Ferrari (assistant director), Chris Businsky, Marisol Cruz, Rebecca Eyler, Lee McCracken, David Plocher, and Bruce Rackliff made key contributions to this report.
Why GAO Did This Study FSA administers billions of dollars in student financial aid, including loans and grants, to eligible college students. The processing of student aid is complex, and FSA relies on non-school partners to carry out various activities supporting the student aid process, such as loan repayment and collection. GAO was asked to review how FSA ensures the protection of PII by its non-school partners. The objectives of this review were to (1) describe the roles of non-school partners and the types of PII shared with them and (2) assess the extent to which FSA policies and procedures for overseeing the non-school partners' protection of student aid data adhere to federal requirements, guidance, and best practices. To address these objectives, GAO collected and reviewed FSA documentation, reports, policies, and procedures and compared FSA policies and procedures to four key practices included in federal guidance for overseeing the protection of PII by non-federal entities. GAO also interviewed FSA officials with responsibility for the oversight of non-school partners. What GAO Found The Department of Education's Office of Federal Student Aid (FSA) partners with various entities (“non-school partners”) that are involved primarily in supporting the repayment and collection of student loans. Federal loan servicers are responsible for collecting payments on loans and providing customer service to borrowers on behalf of the Department of Education through its Direct Loan program. Private collection agencies collect on loans that are in default and work with borrowers to help them get out of default. Guaranty agencies insure lenders against loss due to borrower default and carry out a variety of loan administration activities. Federal Family Education Loan lenders are non-federal lenders, such as banks, credit unions, or other lending institutions, that made loans to students in the past and continue to service these loans. FSA shares a variety of personally identifiable information (PII) on borrowers with its non-school partners. This includes names, addresses, phone numbers, email addresses, Social Security numbers, and financial information. Key practices for overseeing the protection of PII shared with non-federal entities include requiring (1) risk-based security and privacy controls, (2) independent assessments to ensure controls are effectively implemented, (3) corrective actions to address identified weaknesses in controls, and (4) ongoing monitoring of control status. FSA established oversight policies and procedures for loan servicers and private collection agencies that generally address these key practices. However, FSA exercises minimal oversight of lenders' protection of student data (see table). FSA officials maintain that the lenders are subject to other legal and regulatory requirements for protecting customer data. However, FSA does not have a process for ensuring lenders are complying with these requirements, and thus lacks assurance that appropriate risk-based safeguards are being effectively implemented, tested, and monitored. What GAO Recommends GAO is making six recommendations to FSA to ensure that its oversight of non-school partners addresses the four key practices for ensuring the protection of PII. FSA concurred with three of the recommendations, partially concurred with two, and did not concur with one. It also described actions planned or under way to implement four of the recommendations. GAO maintains that all of its recommendations are warranted.
Background Federal Agencies and Key Regulations Related to Lead Paint Hazards While HUD has primary responsibility for addressing lead paint hazards in federally-assisted housing, EPA also has responsibilities related to setting federal lead standards for housing. EPA sets federal standards for lead hazards in paint, soil, and dust. Additionally, EPA regulates the training and certification of workers who remediate lead paint hazards. CDC sets a health guideline known as the “blood lead reference value” to identify children exposed to more lead than most other children. As of 2012, CDC began using a blood lead reference value of 5 micrograms of lead per deciliter of blood. For children whose blood lead level is at or above CDC’s blood lead reference value, health care providers and public health agencies can identify those children who may benefit the most from early intervention. CDC’s blood lead reference value is based on the 97.5th percentile of the blood lead distribution in U.S. children (ages 1 to 5), using data from the National Health and Nutrition Examination Survey. Children with blood lead levels above CDC’s blood lead reference value have blood lead levels in the highest 2.5 percent of all U.S. children (ages 1 to 5). HUD, EPA, and the Department of Health and Human Services (HHS) are members of the President’s Task Force on Environmental Health Risks and Safety Risks to Children. HUD co-chairs the lead subcommittee of this task force with EPA and HHS. The task force published the last national lead strategy in 2000. The primary federal legislation to address lead paint hazards and the related requirements for HUD is the Residential Lead-Based Paint Hazard Reduction Act (Title X of the Housing and Community Development Act of 1992). We refer to this law as Title X throughout this report. Title X required HUD to, among other things, promulgate lead paint regulations, implement the lead hazard control grant programs, and conduct research and reporting, as discussed throughout this report. The two key regulations that HUD has issued under Title X are the Lead Disclosure Rule and the Lead Safe Housing Rule: Lead Disclosure Rule. In 1996, HUD and EPA jointly issued the Lead Disclosure Rule. The rule applies to most housing built before 1978 and requires sellers and lessors to disclose any known information, available records, and reports on the presence of lead paint and lead paint hazards and provide an EPA-approved information pamphlet prior to sale or lease. Lead Safe Housing Rule. In 1999, HUD first issued the Lead Safe Housing Rule, which applies only to housing receiving federal assistance or federally-owned housing being sold. The rule established procedures for evaluating whether a lead paint hazard exists, controlling or eliminating the hazard, and notifying occupants of any lead paint hazards identified and related remediation efforts. The rule established an “elevated blood lead level” as a threshold that requires landlords and PHAs to take certain actions if a child’s blood test shows lead levels meeting or exceeding this threshold. In 2017, HUD amended the rule to align its definition of an “elevated blood lead level” with CDC’s blood lead reference value. This change lowered the threshold that generally required landlords and PHAs to act from 20 micrograms to 5 micrograms of lead per deciliter of blood. According to the rule, when a child under age 6 living in HUD-assisted housing has an elevated blood lead level, the housing provider must take several steps.
These generally include testing the home and other potential sources of the child’s lead exposure within 15 days, ensuring that identified lead paint hazards are addressed within 30 days of receiving a report detailing the results of that testing, and reporting the case to HUD. HUD Offices Involved in Lead Efforts and HUD’s Rental Assistance Programs Office of Lead Hazard Control and Healthy Homes (Lead Office). HUD’s Lead Office is primarily responsible for administering HUD’s two lead hazard control grant programs, providing guidance on HUD’s lead paint regulations, and tracking HUD’s efforts to make housing lead-safe. The Lead Office collaborates with HUD program offices on its oversight and enforcement of lead paint regulations. For instance, the Lead Office issues guidance, responds to questions about requirements of lead paint regulations, and provides training and technical assistance to HUD program staff, PHA staff, and property owners. The Lead Office’s oversight efforts also include maintaining email and telephone hotlines to receive complaints and tips from tenants or homeowners, among others, as they pertain to lead paint regulations. Additionally, the Lead Office, in collaboration with EPA, contributes to the operation of the National Lead Information Center––a resource that provides the general public and professionals with information about lead, lead hazards, and their prevention. Office of Public and Indian Housing (PIH). HUD’s PIH oversees and enforces HUD’s lead paint regulations for the rental assistance programs. As discussed earlier, this report focuses on the two largest rental assistance programs serving the most families with children––the Housing Choice Voucher and public housing programs. Housing Choice Voucher program. In the voucher program, eligible families and individuals are given vouchers as rental assistance to use in the private housing market. Generally, eligible families with vouchers live in the housing of their choice in the private market. The voucher generally pays the difference between the family’s contribution toward rent and the actual rent for the unit. Vouchers are portable; once a family receives one, it can take the voucher and move to other areas where the voucher program is administered. In 2017, there were roughly 2.5 million vouchers available. Public housing program. Public housing is reduced-rent developments owned and operated by the local PHA and subsidized by the federal government. PHAs receive several streams of funding from HUD to help make up the difference between what tenants pay in rent and what it costs to maintain public housing. For example, PHAs receive operating and capital funds through a formula allocation process. PHAs use operating funds to pay for management, administration, and day-to-day costs of running a housing development. Capital funds are used for modernization needs, such as replacing roofs or remediating lead paint hazards. According to HUD rules, generally families that are income-eligible to live in public housing pay 30 percent of their adjusted income toward rent. In 2017, there were roughly 1 million public housing units available. For both of these rental assistance programs, the Office of Field Operations (OFO) within PIH oversees PHAs’ compliance with lead paint regulations, in conjunction with HUD field office staff. The office has a risk-based approach to overseeing PHAs and performs quarterly risk assessments. 
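As a concrete illustration of the Lead Safe Housing Rule timelines described above, the minimal sketch below computes the environmental testing and hazard control deadlines for a hypothetical case. It assumes the 15-day clock starts when the housing provider is notified of the elevated blood lead level; the dates and function names are placeholders, not HUD systems.

from datetime import date, timedelta

ELEVATED_BLOOD_LEAD_THRESHOLD = 5  # micrograms of lead per deciliter, per the amended rule

def response_deadlines(blood_lead_level, date_notified, date_results_reported=None):
    # No rule-triggered action below the threshold HUD aligned with CDC's reference value.
    if blood_lead_level < ELEVATED_BLOOD_LEAD_THRESHOLD:
        return None
    deadlines = {"environmental testing": date_notified + timedelta(days=15)}
    if date_results_reported is not None:
        # The 30-day hazard control clock runs from the report of the testing results.
        deadlines["hazard control"] = date_results_reported + timedelta(days=30)
    return deadlines

print(response_deadlines(6, date(2018, 1, 2), date(2018, 1, 12)))
# {'environmental testing': datetime.date(2018, 1, 17), 'hazard control': datetime.date(2018, 2, 11)}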
Also within PIH, staff from the Real Estate Assessment Center are responsible for inspecting the physical condition of public housing properties. Office of Policy Development and Research (PD&R). HUD’s PD&R is the primary office responsible for data analysis, research, and program evaluations to inform the development and implementation of programs and policies across HUD offices. HUD’s Lead Hazard Control Grant Programs Both grant programs have required grantees to contribute matching funds to supplement their awards: the Lead-Based Paint Hazard Control grant program has set its minimum match as a percentage of the total grant amount, while the Lead Hazard Reduction Demonstration grant program has required at least a 25 percent match. For fiscal years 2013–2017, HUD awarded $527 million for its lead hazard control grants, which included 186 grants to state and local jurisdictions (see fig. 1). In these 5 years, about 40 percent of grants awarded went to jurisdictions in the Northeast and 31 percent to jurisdictions in the Midwest––regions of the country known to have a high prevalence of lead paint hazards. Additionally, in these 5 years, 90 percent of grant awards went to grantees at the local jurisdiction level (cities, counties, and the District of Columbia). The other 10 percent of grant awards went to state governments. During this time period, HUD awarded the most grants to jurisdictions in Ohio (17 grants), Massachusetts and New York (15 grants each), and Connecticut (14 grants). HUD Has Incorporated Relevant Requirements for Awarding Recent Lead Grants, but Could Better Document and Evaluate Grant Processes Lead Grant Programs Have Incorporated Statutory Requirements for Eligibility and Selection HUD’s Lead-Based Paint Hazard Control grant and the Lead Hazard Reduction Demonstration grant programs have incorporated Title X statutory requirements through recent annual funding notices and their grant processes. Title X contains applicant eligibility requirements and selection criteria HUD should use to award lead grants. To be eligible to receive a grant, applicants need to be a state or local jurisdiction, contribute matching funds to supplement the grant award, have an approved comprehensive affordable housing strategy, and have a certified lead abatement program (if the applicant is a state government). HUD has incorporated these eligibility requirements in its grant programs’ 2017 funding notices, which require applicants to demonstrate that they meet these requirements when they apply for a lead grant. According to the 2017 funding notices, applicants must detail the sources and amounts of their matching contributions in their applications. Similarly, applicants must submit a form certifying that the proposed grant activities are consistent with their local affordable housing strategy. HUD’s 2017 funding notices state that if applicants did not meet these eligibility requirements, HUD would not consider their applications. Additionally, Title X requires HUD to award lead grants according to the following applicant selection criteria: the extent to which an applicant’s proposed activities will reduce the risk of lead poisoning for children under the age of 6; the degree of severity and extent of lead paint hazards in the applicant’s jurisdiction; the applicant’s ability to supplement the grant award with state, local, or private funds; the applicant’s ability to carry out the proposed grant activities; and other factors determined by the HUD Secretary to ensure that the grants are used effectively. In its 2017 funding notices, HUD incorporated the Title X applicant selection criteria through five scoring factors that it used to assess lead grant applications.
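To make these mechanics concrete, the minimal sketch below encodes the Title X eligibility screen described above together with the award logic discussed next in this section: scored applications qualify at or above a 75-point threshold and are funded in descending score order until funds are exhausted. The applicant records, field names, and dollar amounts are hypothetical placeholders, not HUD's actual systems or data.

QUALIFYING_SCORE = 75  # threshold used in HUD's award process, per this section

def is_eligible(applicant):
    # Title X eligibility: state or local jurisdiction, matching funds, an
    # approved housing strategy, and (for state applicants) a certified
    # lead abatement program.
    if not (applicant["is_state_or_local"] and applicant["has_matching_funds"]
            and applicant["has_housing_strategy"]):
        return False
    return (not applicant["is_state"]) or applicant["has_certified_abatement_program"]

def award_grants(applicants, available_funds):
    qualified = [a for a in applicants
                 if is_eligible(a) and a["score"] >= QUALIFYING_SCORE]
    awards = {}
    for a in sorted(qualified, key=lambda a: a["score"], reverse=True):
        if available_funds <= 0:
            break
        amount = min(a["requested"], available_funds)
        awards[a["name"]] = amount
        available_funds -= amount
    return awards

applicants = [
    {"name": "City A", "is_state_or_local": True, "is_state": False,
     "has_matching_funds": True, "has_housing_strategy": True,
     "has_certified_abatement_program": False, "score": 88, "requested": 3_000_000},
    {"name": "State B", "is_state_or_local": True, "is_state": True,
     "has_matching_funds": True, "has_housing_strategy": True,
     "has_certified_abatement_program": True, "score": 76, "requested": 2_500_000},
]
print(award_grants(applicants, 4_000_000))  # City A fully funded; State B gets the remainder

Note that this sketch does not model the judgment calls described later in this section, such as the 2017 case in which HUD split remaining funds evenly between two applicants with identical scores rather than funding requested amounts.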
HUD allocated a certain number of points to each scoring factor. Applicants are required to develop their grant proposals in response to the scoring factors. When reviewing applications, HUD staff evaluated an applicant’s response to the factors and assigned points for each factor. See table 1 for a description of the 2017 lead grant programs’ scoring factors and points. As shown in table 1, HUD awarded the most points (46 out of 100) to the “soundness of approach” scoring factor, according to HUD’s 2017 funding notices. Through this factor, HUD incorporated Title X selection criteria on an applicant’s ability to carry out the proposed grant activities and supplement a grant award with state, local, or private funds. For example, HUD’s 2017 funding notices required applicants to describe their detailed plans to implement grant activities, including how the applicants will establish partnerships to make housing lead-safe. Specifically, HUD began awarding 2 of the 100 points to applicants who demonstrated partnerships with local public health agencies to identify families with children for enrollment in the lead grant programs. Additionally, HUD asked applicants to identify partners that can help provide assistance to complete the lead hazard control work for high-cost housing units. Furthermore, HUD required applicants to identify any nonfederal funding, including funding from the applicants’ partners. Appendix I includes examples of state, local, and nongovernmental funds that selected grantees planned to use to supplement their lead grants. HUD Has Taken Actions Consistent with OMB Requirements but Has Not Fully Documented or Evaluated Its Lead Grant Programs’ Processes In its lead grant programs, HUD has taken actions that were consistent with OMB’s requirements for competitively awarded grants. OMB generally requires federal agencies to: (1) establish a merit-review process for competitive grants that includes the criteria and process to evaluate applications; and (2) develop a framework to assess the risks posed by applicants for competitive grants, among other things. Through a merit-review process, an agency establishes and applies criteria to evaluate the merit of competitive grant applications. Such a process helps to ensure that the agency reviews grant applications in a fair, competitive, and transparent manner. Consistent with the OMB requirement to establish a merit review process, HUD has issued annual funding notices that communicate clear and explicit evaluative criteria. In addition, HUD has established processes for reviewing and scoring grant applications using these evaluative criteria, and selects grant recipients based on the review scores (see fig. 2). For example, applicants that score at or above 75 points are qualified to receive awards from HUD. Also, HUD awards funds beginning with the highest scoring applicant and proceeds by awarding funds to applicants in a descending order until funds are exhausted. Furthermore, consistent with the OMB requirement to develop a framework to assess applicant risks, HUD has developed a framework to assess the risk posed by lead grant applicants by, among other things, deeming ineligible those applicants with past performance deficiencies or those that do not have a financial management system that meets federal standards. However, HUD has not fully documented or evaluated its lead grant processes in reviewing and scoring the grants and making award decisions: Documenting grant processes and award decisions. 
While HUD has established processes for its lead grant programs, it lacks documentation, including detailed guidance to help ensure that staff carry out processes consistently and appropriately. Federal internal control standards state that agency management should develop and maintain documentation of its internal control system. Such documentation assists agency management by establishing and communicating the processes to staff. Additionally, documentation of processes can provide a means to retain organizational knowledge and communicate that knowledge as needed to external parties. The Lead Office’s Application Review Guide describes its grant application review and award processes at a high level but does not provide detailed guidance for staff as to how tasks should be performed. For example, the Guide notes that reviewers score eligible applications according to factors contained in the funding notices but does not describe how the reviewers should allocate points to the subfactors that make up each factor. Lead Office staff told us that creating detailed scoring guidance would be challenging because applicants’ proposed grant activities differ widely, and they said that scoring grant applications is a subjective process. While scoring grant applications may involve subjective judgments, improved documentation of grant review and scoring processes, including additional direction to staff, can help staff apply their professional judgment more consistently in evaluating applications. By better documenting processes, HUD can better ensure that staff evaluate applications consistently. Additionally, HUD has not fully documented its rationale for deciding which applicants receive lead grant awards and for deciding the dollar amounts of grant awards to successful applicants. In prior work examining federal grant programs, one recommended practice we identified is that agencies should document the rationale for award decisions, including the reasons individual applicants were selected or not and how award funding amounts were determined. While HUD’s internal memorandums listed the applicants selected and the award amounts, these memorandums did not document the rationale for these decisions or provide information sufficient to help applicants understand award outcomes. Lead Office staff told us that most grantees have received the amount of funding they requested in their applications, which was generally based on HUD’s maximum grant award amount. Lead Office staff said they could use their professional judgment to adjust award amounts to extend funding to more applicants when applicants received similar scores. However, the Lead Office’s documentation we reviewed did not explain this type of decision making. For example, in 2017, when two applicants received identical scores on their applications, HUD awarded each applicant 50 percent of the remaining available funds rather than awarding either applicant the amount they requested. Representatives of one of the two grantees told us they did not know why the Lead Office had not provided them the full amount they had requested. Lead Office staff told us that, to date, HUD has not considered alternative ways to award grant funding amounts. By fully documenting grant award processes, including the rationale for award decisions and amounts, HUD could provide greater transparency to grant applicants about its grant award decisions. Evaluating processes. 
HUD lacks a formal process for reviewing and updating its lead grant funding notices, including the factors and point allocations used to score applications. Federal internal control standards state that agencies should implement control activities through policies and that periodic review of policies and procedures can provide assurance of their effectiveness in achieving the agency’s objectives. Lead Office staff told us that previous changes to the factors and point allocation used to score applicants have been made based on informal discussions among staff. However, the Lead Office does not have a formal process to review and evaluate the relevance and appropriateness of the factors or points used to score applicants. Lead Office staff told us that they have never analyzed the scores applicants received for the factors to identify areas where applicants may be performing well or poorly or to help inform decisions about whether changes may be needed to the factors or points. Additionally, HUD has not changed the threshold criteria used to make award decisions since the threshold was established in 2003. As previously shown in figure 2, applicants who received at least 75 points (out of 100) have been qualified to receive a grant award. However, HUD grant documentation, including the funding notices and the Application Review Guide, does not explain the significance of this 75-point threshold. Lead Office staff stated that this threshold was first established in 2003 by HUD based on OMB guidance. A formal review of this 75-point threshold can help HUD determine whether it remains appropriate for achieving the grant programs’ objectives. Furthermore, by periodically evaluating processes for reviewing and scoring grant applications, HUD can better determine whether these processes continue to help ensure that lead grants reach areas of the country at greater risk for lead paint hazards. HUD Has Begun to Develop Analyses to Help More Fully Identify Areas at Risk for Lead Paint Hazards but Has Not Set Time Frames for Using Local-Level Data HUD has begun to develop analyses and tools to inform its efforts to target outreach and ensure that grant awards go to areas of the country that are at risk for lead paint hazards. However, HUD has not developed time frames for incorporating the results of the analyses into its lead grant programs’ processes. HUD has required jurisdictions applying for lead grants to include data on the need or extent of the problem in their jurisdiction (i.e., scoring factor 2). Additionally, Lead Office staff told us that HUD uses information from the American Healthy Homes Survey to obtain information on lead paint hazards across the country. However, the staff explained that the survey was designed to provide meaningful results at the regional level and did not include enough homes in its sample to provide information about housing conditions, such as lead paint hazards, at the state or local level. Because HUD awards lead grants to state and local jurisdictions, it cannot effectively use the survey results to help the agency make award decisions or inform decisions about areas for potential outreach. In early 2017, the Lead Office began working with PD&R to develop a model to identify local jurisdictions (at the census-tract level) that may be at heightened risk for lead paint hazards. 
Lead Office staff said that they hope to use results of this model to develop geographic tools to help target HUD funding to areas of the country at risk for lead paint hazards but not currently receiving a HUD lead grant. Lead Office staff said that they could reach out to these at-risk areas, help them build the capacity needed to administer a grant, and encourage them to apply. For example, HUD has identified that Mississippi and two major metropolitan areas in Florida (Miami and Tampa) had not applied for a lead grant. HUD has conducted outreach to these areas to encourage them to apply for a lead grant. In 2016, the City of Jackson, Mississippi, applied for and received a lead grant. Though the Lead Office has collaborated with PD&R on the model, HUD has not developed specific time frames to operationalize the model and incorporate its local-level results to help better identify areas at risk for lead paint hazards. Federal internal control standards require agencies to define objectives clearly to enable the identification of risks. This includes clearly defining time frames for achieving the objectives. Setting specific time frames could help to ensure that HUD operationalizes this model in a timely manner. By operationalizing a model that incorporates local data on lead paint hazard risk, HUD can better target its limited grant resources towards areas of the country with significant potential for lead hazard control needs. We performed a county-level analysis using HUD and Census Bureau data and found that most lead grants from 2013 through 2017 have gone to counties with at least one indicator of lead paint hazard risk. Information we reviewed, such as relevant literature, suggests that the two common indicators of lead paint hazard risk are the prevalence of housing built before the 1978 lead paint ban and the prevalence of individuals living below the poverty line. We defined areas with lead paint hazard risk as counties that had percentages higher than the corresponding national percentages for both of these indicators. The estimated average percentage nationwide of total U.S. housing stock constructed before 1980 was 56.9 percent and the estimated average percentage nationwide of individuals living below the poverty line was 17.5 percent. As shown in figure 3, our analysis estimated that 18 percent of lead grants from 2013 through 2017 have gone to counties with both indicators above the estimated national percentages, 59 percent of grants have gone to counties with estimated percentages of old housing above the estimated national percentage, and 7 percent of grants have gone to counties that had estimated poverty rates above the estimated national percentage. When HUD finalizes its model and incorporates information into its lead grant processes, HUD will be able to better target its grant resources to areas that may be at heightened risk for lead paint hazards. HUD Could Take Additional Steps to Monitor Compliance with Lead Paint Regulations HUD Has Taken Steps to Strengthen Compliance Monitoring for Lead Paint Regulations In 2016, HUD began to incorporate new steps to monitor PHAs’ compliance with lead paint regulations for nearly 4,000 PHAs.
Previously, according to PIH staff, HUD required only that PHAs annually self-certify their compliance with lead paint laws and regulations, and HUD’s Real Estate Assessment Center inspectors check for lead paint inspection reports and disclosure forms at public housing properties during physical inspections. Starting in June 2016, PIH began using new tools for HUD field staff to track PHAs’ compliance with lead paint requirements in the voucher and public housing programs. As shown in figure 4, PIH’s compliance oversight processes for the voucher and public housing programs include various monitoring tools for overseeing PHAs. Key components of PIH’s lead paint oversight processes include the following: Tools for tracking lead hazards and cases of elevated blood lead levels in children. HUD uses two databases to monitor PHAs’ compliance with lead paint regulations: (1) the Lead-Based Paint Response Tracker, which PIH uses to collect and monitor information on the status of lead paint-related documents, including lead inspection reports and disclosure forms, in public housing properties but not in units with voucher-assisted households; and (2) the Elevated Blood Lead Level Tracker, which PIH uses to collect and monitor information reported by PHAs on cases of elevated blood lead levels in children living in voucher and public housing units. In June 2016, OFO began using the Lead-Based Paint Response Tracker database to store information on public housing units and to help HUD field office staff follow up with PHAs that have properties missing required lead documentation. In July 2017, OFO began using information recorded in the Elevated Blood Lead Level Tracker to track whether PHAs started lead remediation activities in HUD-assisted housing within the time frames required by the Lead Safe Housing Rule. Lead paint hazards included in PHAs’ risk assessment scores. OFO assigns scores to PHAs based on their relative risk in four categories: physical condition, financial condition, management capacity, and governance. OFO uses these scores to identify high- and very high-risk PHAs that will receive on-site full compliance reviews. In July 2017, OFO incorporated data from the Real Estate Assessment Center into the physical condition category of its Risk Assessment Protocol to help account for potential lead paint hazards at public housing properties. Questions about lead paint included as part of on-site full compliance reviews. In fiscal year 2016, HUD field offices began conducting on-site full compliance reviews at high- and very high-risk PHAs as part of HUD’s compliance monitoring program to enhance oversight and accountability of PHAs. In fiscal year 2017, as part of the reviews, HUD field office staff started using a compliance monitoring checklist to determine if PHAs comply with major HUD rules and to gather additional information on the PHAs. This checklist included lead-related questions that PIH field office staff use to determine whether PHAs meet the requirements in lead paint regulations for both the voucher and public housing programs. In 2016, OFO and HUD field offices began using information from the new monitoring efforts to identify potential noncompliance by PHAs with lead paint regulations and help the PHAs resolve the identified issues. According to HUD data, as of November 2017, the Lead-Based Paint Response Tracker indicated that 9 percent (357) of PHAs were missing both lead inspection reports and lead disclosure forms for one or more properties.
There were 973 PHAs missing one of the two required documents. OFO staff told us that they prioritized following up with PHAs that were missing both documents. According to OFO staff, PHAs can resolve potential noncompliance by submitting adequate lead documentation to HUD. OFO staff told us the agency considers missing lead documentation “potential” noncompliance because PHAs may provide the required documentation or they may be exempt from certain requirements (e.g., HUD-designated elderly housing). HUD Does Not Have a Plan to Mitigate Risks Associated with Its Compliance Monitoring Approach While HUD has taken steps to strengthen compliance monitoring processes, it does not have a plan to identify and address the risks of noncompliance by PHAs with lead paint regulations. Federal internal control standards state that agencies should identify, analyze, and respond to risks related to achieving the defined objectives. Furthermore, when an agency has made significant changes to its processes—as HUD has done with its compliance monitoring processes—management review of changes to these processes can help the agency determine that its control activities are designed appropriately. Our review found that HUD does not have a plan to help mitigate and address risks related to noncompliance with lead paint regulations by PHAs (i.e., ensuring lead safety in assisted housing). Additionally, our review found several limitations with HUD’s new compliance monitoring approach, which include the following: Reliance on PHA self-certifications. HUD’s compliance monitoring processes rely in part on PHAs self-certifying that they are in compliance with lead paint regulations, but recent investigations have found that some PHAs may have falsely certified that they were in compliance. In November 2017, HUD filed a fraud complaint against two former officials of the Alexander County (Illinois) Housing Authority, alleging that the former officials, among other things, falsely certified to HUD that the Housing Authority was in compliance with lead paint regulations. Further, PIH staff told us there are ongoing investigations related to potential noncompliance with lead paint regulations and false certifications at two other housing authorities. Lack of comprehensive data for the public housing program. OFO started to collect data for the public housing program in the Lead-Based Paint Response Tracker in June 2016, and the inventory of all public housing properties includes units inspected since 2012. In addition, HUD primarily relies on the presence of lead inspection reports but does not record in the database when inspections and remediation activities occurred and does not determine whether they are still effective. Because of this, the information contained in the lead inspection reports may no longer be up-to-date. For example, a lead inspection report from the 1990s may provide evidence that abatement work was conducted at that time, but according to PIH staff, the housing may no longer be lead-safe. Lack of readily available data for the voucher program. The voucher program does not have readily available data on housing units’ physical condition and compliance with lead paint regulations because data on the roughly 2.5 million units in the program are kept at the PHA level. According to PIH staff, HUD plans to adopt a new system for the voucher program that will include standardized, electronic data for voucher units.
PIH staff said the new system (Uniform Physical Condition Standards for Vouchers Protocol) will allow greater oversight and provide HUD the ability to conduct data analysis for voucher units. Challenges identifying children with elevated blood lead levels. For several reasons, PHAs face ongoing challenges receiving information from state and local public health departments on the number of children identified with elevated blood lead levels. First, children across the U.S. are not consistently screened and tested for exposure to lead. Second, according to CDC data, many states use a less stringent health guideline to identify children compared to the health standard that HUD uses (i.e., CDC’s current blood lead reference value). PIH staff told us that some public health departments may not report children with elevated blood lead levels to PHAs because they do not know that a child is living in a HUD-assisted unit and needs to be identified using the more stringent HUD standard. Lastly, Lead Office staff told us that privacy laws in some states may impose restrictions on public health departments’ ability to share information with PHAs. Limited coverage of on-site compliance reviews. While full on-site compliance reviews can be used to determine if PHAs are in compliance with lead paint regulations, OFO conducts a limited number of these reviews annually. For example, in fiscal year 2017, OFO conducted 72 reviews of the roughly 4,000 total PHAs. Based on OFO information, there are 973 PHAs that are missing either lead inspection reports or lead disclosure forms, indicating some level of potential noncompliance. HUD’s steps since June 2016 to enhance monitoring of PHAs’ compliance with lead paint regulations have some limitations that create risks in its new compliance monitoring approach. By developing a plan to help mitigate and address the various limitations associated with the new compliance monitoring approach, HUD could further strengthen its oversight and help ensure that PHAs maintain lead-safe housing units. HUD Lacks Detailed Procedures to Address Noncompliance and Make Enforcement Decisions HUD does not have detailed procedures to address PHA noncompliance with lead paint regulations or to determine when enforcement decisions may be needed. Lead Office staff told us that their enforcement program aims to ensure that PHAs have the information necessary to remain in compliance with lead paint regulations. According to federal internal control standards, agencies should implement control activities through policies and procedures. Effective design of procedures to address noncompliance would include documenting specific actions to be performed by agency staff when deficiencies are identified and related time frames for these actions. While HUD staff stated that they address PHA noncompliance through ongoing communication and technical assistance to PHAs, HUD has not documented specific actions to be performed by staff when deficiencies are identified. OFO staff told us that in general, PIH has not needed to take many enforcement actions because field offices are able to resolve most lead paint regulation compliance concerns with PHAs through ongoing communication and technical assistance. For example, HUD field offices sent letters to PHAs when Real Estate Assessment Center inspectors could not locate required lead inspection reports and lead disclosure forms, and requested that the PHA send the missing documentation within 30 days.
However, OFO’s fiscal years 2015–2017 internal memorandums on monitoring and oversight guidance for HUD field offices did not contain detailed procedures, including time frames or criteria HUD staff would use to determine when to consider whether a more formal enforcement action might be warranted. Additionally, Lead Office staff said if efforts to bring a PHA into compliance are unsuccessful, the Lead Office would work in conjunction with PIH and HUD’s Office of General Counsel’s Departmental Enforcement Center to determine if an enforcement action is needed, such as withholding or delaying funds from a PHA or imposing civil money penalties on a PHA. Lead Office staff also told us that instead of imposing a fine on a PHA, HUD would rather work with the PHA to resolve the lead paint hazard. However, the Lead Office provided no documentation detailing the specific steps or time frames HUD staff would follow to determine when a noncompliance case is escalated to the Office of General Counsel. In a March 2018 report to Congress, HUD noted that children continued to test positive for lead in HUD-assisted housing in 2017. In the same report, HUD notes PIH and the Lead Office will continue to work with PHAs to ensure compliance with lead paint regulations. By adopting procedures that clearly describe when lead paint hazard compliance efforts are no longer sufficient and enforcement decisions are needed, HUD can better keep PHAs accountable in a consistent and timely manner. HUD’s Blood Lead Level Standard Aligns with CDC Guidelines and Lead Inspection Standards Are Less Stringent in the Voucher Program HUD’s Blood Lead Level Standard Aligns with the Current CDC Health Guideline The standard HUD uses to identify children with elevated blood lead levels and initiate lead hazard control activities in its rental assistance programs aligns with the health guideline set by CDC in 2012. HUD also uses CDC’s health guideline in its lead grant programs. In HUD’s January 2017 amendment to the Lead Safe Housing Rule, HUD made its standard for lead in a child’s blood more stringent by lowering it from 20 micrograms to 5 micrograms of lead per deciliter of blood, matching CDC’s health guideline (i.e., blood lead reference value). Specifically, HUD’s stronger standard allows the agency to respond more quickly when children under 6 years old are exposed to lead paint hazards in voucher and public housing units. The January 2017 rule also established more comprehensive testing for children and evaluation procedures for HUD-assisted housing. According to HUD’s press release that accompanied the rule, by aligning HUD’s standard with CDC’s guidance, HUD can respond more quickly in cases when a child who lives in HUD-assisted housing shows early signs of lead in their blood. The 2017 rule notes HUD will revise the agency’s elevated blood lead level to align with future changes HHS may make to its recommended environmental intervention level. HUD’s Lead Dust Standards Align with EPA’s for Rental Assistance Programs and Exceed Them for Lead Grant Programs HUD’s standards for lead dust levels align with EPA standards for its rental assistance programs and exceed EPA standards for the lead grant programs. In 2001, EPA published a final rule on lead paint hazard standards, including lead dust clearance standards. The rule established standards to help property owners, contractors, and government agencies identify lead hazards in residential paint, dust, and soil and address these hazards in and around homes.
Under these standards, lead is considered a hazard when equal to or exceeding 40 micrograms of lead in dust per square foot sampled on floors and 250 micrograms of lead in dust per square foot sampled on interior window sills. In 2004, HUD amended the Lead Safe Housing Rule to incorporate the 2001 EPA lead dust standards as HUD’s standards. Since then, HUD has used EPA’s 2001 lead hazard standards in its rental assistance programs. In February 2017, HUD released policy guidance for its lead grantees requiring them to meet new and more protective requirements for identifying and addressing lead paint hazards in the lead grant programs than those imposed by EPA’s 2001 standards that HUD uses in the rental assistance programs. For example, the policy guidance requires grantees to consider lead dust a hazard on floors at 10 micrograms per square foot sampled (down from 40) and on window sills at 100 micrograms per square foot sampled (down from 250). The policy guidance noted that the new requirements are supported by scientific evidence on the adverse effects of lead exposure at low blood lead levels in children. Further, the policy guidance established a standard for porch floors––an area that EPA has not covered––because porch floors can be both a direct exposure source for children and a source of lead dust that can be tracked into the home. On December 27, 2017, the United States Court of Appeals for the Ninth Circuit ordered EPA to issue a proposed rule updating its lead dust hazard standard and the definition of lead-based paint within 90 days of the decision becoming final and a final rule within 1 year of the proposed rule. Because HUD’s Lead Safe Housing Rule generally defines lead paint hazards and lead dust hazards to mean the levels promulgated by EPA, if EPA changes its 2001 standards, those new standards would be used in HUD’s rental assistance programs. On March 16, 2018, EPA filed a request with the court asking for clarification of when EPA is required to issue the proposed rule and followed up with a motion seeking clarification or an extension. In response to EPA’s motion, on March 26, 2018, the court issued an order clarifying time frames and ordered that the proposed rule be issued within 90 days from March 26, 2018.

HUD Uses a Less Stringent Lead Inspection Standard for the Voucher Program

HUD’s Lead Safe Housing Rule requires a stricter lead inspection standard for public housing than for voucher units. According to HUD staff, HUD does not have the authority to require the more stringent inspection in the voucher program. While HUD has acknowledged that moving to a stricter inspection standard for voucher units would provide greater assurance that these units are lead-safe and expressed its plan to support legislative change to authorize it to impose a more stringent inspection standard, HUD has not requested authority from Congress to amend its inspection standard for the voucher program. For voucher units, HUD requires PHAs to ensure that trained inspectors conduct visual assessments to identify deteriorated paint for housing units inhabited by a child under 6 years old. In a visual assessment, an inspector looks for deteriorated paint and visible surface dust but does not conduct any testing of paint chips or dust samples from surfaces to determine the presence of lead in the home’s paint. By contrast, for public housing units, HUD requires a stronger inspection process. Lead-based paint inspections are required for pre-1978 public housing units.
If that inspection identifies lead-based paint, PHAs must then perform a risk assessment. In a risk assessment, in addition to conducting a visual inspection, an inspector tests for the presence of lead paint by collecting and testing samples of paint chips and surface dust, and typically using a specialized device (an X-ray fluorescence analyzer) to measure the amount of lead in the paint on a surface, such as a wall, door, or window sill. Staff from HUD’s Lead Office and the Office of General Counsel told us that Title X did not include specific risk assessment requirements for voucher units, and HUD does not believe, therefore, that it has the statutory authority to require an assessment more thorough than a visual assessment of voucher units. As of May 2018, HUD had not requested statutory authority to change the visual assessment standard used in the voucher program. However, HUD previously acknowledged the limitation of the weaker inspection standard in a June 2016 publication titled Lead-Safe Homes, Lead-Free Kids Toolkit. In this publication, HUD noted its plans to support legislative change to strengthen lead safety in voucher units by eliminating reliance on visual-only inspections. Staff from HUD’s Lead Office and Office of General Counsel told us the agency recognizes that risk assessments are more comprehensive than visual assessments. The staff noted that, by definition, a risk assessment is a stronger inspection standard than a visual-only assessment because it includes additional identification and testing. In responding to a draft of this report, HUD cited the need to conduct and evaluate the results of a statistically rigorous study on the impacts of requiring a lead risk assessment versus a visual assessment, such as the impact on leasing times and the availability of housing for low-income families. HUD further noted that such a study could explore whether alternative options to the full risk assessment standard (such as targeted dust sampling) could achieve similar levels of protection for children in the voucher program. Requesting and obtaining authority to amend the standard for the voucher program would not preclude HUD from doing such a study. Such analysis might support a range of options based on consideration of health effects for children, housing availability, and other relevant factors. Because HUD’s Lead Safe Housing Rule contains a weaker lead inspection standard for the voucher program, children living in voucher units may be less protected from lead paint hazards than children living in public housing. By requesting and obtaining statutory authority to amend the voucher program inspection standard, HUD would be positioned to take steps to ensure that children in the voucher program are provided better protection as indicated by analysis of the benefits and costs from amending the standard.

HUD Could Better Measure and Report on Performance of Lead Efforts

HUD has taken limited steps to measure, evaluate, and report on the performance of its programmatic efforts to ensure that housing is lead-safe. First, HUD has tracked one performance measure for its lead grant programs but lacks comprehensive performance goals and measures. Second, while HUD has evaluated the effectiveness of its Lead-Based Paint Hazard Control grant program, it has not formalized plans and does not have a time frame for evaluating its lead paint regulations. Third, HUD has not issued an annual report on the results of its lead efforts since 1997.
A key aspect of promoting improved federal management and greater efficiency and effectiveness is that agencies set goals and report on performance. We have previously reported that a program performance assessment contains three key elements––program goals, performance measures, and program evaluations (see fig. 5). In our prior work, we have noted that both the executive branch and congressional committees need evaluative information to help them make decisions about the programs they oversee––information that tells them whether, and why, a program is working well or not.

Program goals and performance measures. HUD has tracked one performance measure for making private housing units lead-safe as part of its lead grant programs but lacks goals and performance measures that more fully cover the range of its lead efforts. In addition to our prior work on program goals and performance measures, federal internal control standards state that management should define objectives clearly and that defining objectives in measurable terms allows agency management to assess performance toward achieving objectives. According to Lead Office staff, HUD provides information on its goals and performance measures related to its lead efforts in the agency’s annual performance reports. For example, the fiscal year 2016 report contains information about the number of private housing units made lead-safe as part of HUD’s lead grant programs but does not include any performance measures on HUD’s lead efforts for the voucher and public housing programs. Lead Office staff told us HUD does not have systems to count the number of housing units made lead-safe in these two housing programs. The staff said the Lead Office and PIH recently began discussing whether data from an existing HUD database could be used to count units made lead-safe within these programs. However, they could not provide additional details on the status of these efforts. Without comprehensive goals and performance measures, HUD does not know the results it is achieving with all its lead paint hazard reduction efforts. Moreover, HUD may be missing opportunities to use performance information to improve the results of its lead efforts.

Program evaluations. HUD has evaluated the effectiveness of its Lead-Based Paint Hazard Control grant program but has not taken similar steps to evaluate the Lead Safe Housing Rule or Lead Disclosure Rule. As previously stated, our prior work on program performance assessment has noted the importance of program evaluations to know how well a program is working relative to its objectives. Additionally, Title X required HUD to conduct research to evaluate the long-term cost-effectiveness of interim lead hazard control and abatement strategies. For its Lead-Based Paint Hazard Control grant program, HUD has contracted with outside experts to conduct evaluations. For example, the National Center for Healthy Housing and the University of Cincinnati’s Department of Environmental Health evaluated whether the lead hazard control methods used by grantees continued to be effective 1, 3, 6, and 12 years later. The evaluations concluded that the lead hazard control activities used by grantees substantially reduced lead dust levels, and that the original evaluation and those completed 1 and 3 years later were also associated with substantial declines in the blood lead levels of children living in the housing remediated using lead grant program funds.
HUD has general plans to conduct evaluations of the Lead Safe Housing Rule and the Lead Disclosure Rule, but Lead Office and PD&R staff said they did not know when or whether the studies would begin. In a 2016 publication, HUD noted its plans to evaluate the Lead Safe Housing Rule requirements and noted that such an evaluation would contribute toward policy recommendations and program improvements. Additionally, in its 2017 Research Roadmap, PD&R outlined HUD’s plans for two studies to evaluate the effectiveness of requirements within the Lead Safe Housing and Lead Disclosure Rules. However, PD&R and Lead Office staff were not able to provide a time frame for when the studies would begin. PD&R staff told us that the plans noted within the Research Roadmap were HUD’s first step in research planning and prioritization but that appropriations for research have been prescriptive in recent years (i.e., tied to specific research topics) and have fallen short of the agency’s research needs. By studying the effectiveness of requirements included within the Lead Safe Housing and Lead Disclosure Rules, including the cost-effectiveness of the various lead hazard control methods, HUD could have more complete information to assess how effectively it uses federal dollars to make housing units lead-safe.

Reporting. HUD has not reported on its lead efforts as required since 1997. Title X includes annual and biennial reporting requirements for HUD. Staff from HUD’s Lead Office and General Counsel told us that in 1998 the agency agreed with the congressional committees of jurisdiction that HUD could satisfy this reporting requirement by including the required information in its annual performance reports. Lead Office staff told us HUD’s recent annual performance reports do not contain specific information required by law and that HUD has not issued other publicly available reports that contain the Title X reporting requirements. Title X requires HUD to annually provide Congress information on its progress in implementing the lead grant programs; a summary of studies looking at the incidence of lead poisoning in children living in HUD-assisted housing; the results of any required lead technical studies; and estimates of federal funds spent on lead hazard evaluation and reduction in HUD-assisted housing. As previously stated, the annual performance reports have provided information on the number of housing units made lead-safe through the agency’s lead grant programs, but not through the voucher or public housing programs. In March 2018, Lead Office staff told us HUD plans to submit separate reports on the agency’s lead efforts, covering the Title X reporting requirements, starting in fiscal year 2019. By complying with Title X statutory reporting requirements, HUD would put Congress and the public in a better position to know the progress it is making toward ensuring that housing is lead-safe.

Conclusions

Lead exposure can cause serious, irreversible cognitive damage that can impair a child for life. Through its lead grant programs and oversight of lead paint regulations, HUD is helping to address lead paint hazards in housing. However, our review identified specific areas where HUD could improve the effectiveness of its efforts to identify and address lead paint hazards and protect children in low-income housing from lifelong health problems:

Documenting and evaluating grant processes.
HUD could improve documentation for its lead grant programs’ processes by providing more specific direction to staff and documenting grant award rationale. In doing so, HUD could better ensure that grant program staff score grant applications consistently and appropriately and provide greater transparency about its award decisions. Additionally, periodically evaluating its grant processes and procedures could help HUD better ensure that its lead grants reach areas most at risk for lead paint hazards.

Identifying areas at risk for lead hazards. By developing specific time frames to finalize and incorporate the results of its model to more fully identify areas at risk for lead paint hazards, HUD can better identify and conduct outreach to at-risk localities that its lead grant programs have not yet reached.

Overseeing compliance with lead paint regulations. False self-certifications of compliance by some PHAs and other limitations in HUD’s compliance monitoring approach make it essential for HUD to develop a plan to mitigate and address limitations, as well as establish procedures to determine when enforcement decisions are needed. These actions could further strengthen HUD’s oversight and hold PHAs accountable for ensuring that housing units are lead-safe.

Amending inspection standard in the voucher program. Children living in voucher units may receive less protection from lead paint hazards than children living in public housing units because HUD applies different lead inspection standards to the two programs. HUD could ensure that children in the voucher program are provided better protection from lead by requesting and obtaining statutory authority to amend the voucher program inspection standard as indicated by analysis of the benefits and costs of amending the standard.

Assessing and reporting on performance. Fully incorporating key elements of performance assessment—by developing comprehensive goals, improving performance measures, and adhering to reporting requirements—could better enable HUD to assess its own progress and target its resources toward lead efforts that maximize impact. Additionally, HUD may be missing opportunities to inform the Congress and the public about how HUD’s lead efforts have helped reduce lead poisoning in children.

Recommendations for Executive Action

We are making the following nine recommendations to HUD:

The Director of HUD’s Lead Office should ensure that the office more fully documents its processes for scoring and awarding lead grants and its rationale for award decisions. (Recommendation 1)

The Director of HUD’s Lead Office should ensure that the office periodically evaluates its processes for scoring and awarding lead grants. (Recommendation 2)

The Director of HUD’s Lead Office, in collaboration with PD&R, should set time frames for incorporating relevant data on lead paint hazard risks into the lead grant programs’ processes. (Recommendation 3)

The Director of HUD’s Lead Office and the Assistant Secretary for PIH should collaborate to establish a plan to mitigate and address risks within HUD’s lead paint compliance monitoring processes. (Recommendation 4)

The Director of HUD’s Lead Office and the Assistant Secretary for PIH should collaborate to develop and document procedures to ensure that HUD staff take consistent and timely steps to address issues of PHA noncompliance with lead paint regulations.
(Recommendation 5)

The Secretary of HUD should request authority from Congress to amend the inspection standard to identify lead paint hazards in the Housing Choice Voucher program as indicated by analysis of health effects for children, the impact on landlord participation in the program, and other relevant factors. (Recommendation 6)

The Director of the Lead Office should develop performance goals and measures to cover the full range of HUD’s lead efforts, including its efforts to ensure that housing units in its rental assistance programs are lead-safe. (Recommendation 7)

The Director of the Lead Office, in conjunction with PD&R, should finalize plans and develop a time frame for evaluating the effectiveness of the Lead Safe Housing and Lead Disclosure Rules, including an evaluation of the long-term cost effectiveness of the lead remediation methods required by the Lead Safe Housing Rule. (Recommendation 8)

The Director of the Lead Office should complete statutory reporting requirements, including but not limited to its efforts to make housing lead-safe through its lead grant programs and rental-assistance programs, and make the report publicly available. (Recommendation 9)

Agency Comments and Our Evaluation

We provided a draft of this report to HUD for review and comment. We also provided the relevant excerpts of the draft report to CDC and EPA for their review and technical comments. In written comments, reproduced in appendix III, HUD disagreed with one of our recommendations and generally agreed with the remaining eight. HUD and CDC also provided technical comments, which we incorporated as appropriate. EPA did not have any comments on the relevant excerpts of the draft report provided to it. In its general comments, HUD noted that the lead grant programs and HUD’s compliance assistance and enforcement of lead paint regulations have contributed significantly to, among other things, the low prevalence of lead-based paint hazards in HUD-assisted housing. Further, HUD said the lead grant programs and compliance assistance and enforcement of lead paint regulations have played a critical part in developing and maintaining the national lead-based paint safety infrastructure. HUD asked that this contextual information be included in the background of the report. The draft report included detailed information on the purpose and scope of HUD’s lead grant programs, two key regulations related to lead paint hazards, and efforts to make housing lead-safe. Furthermore, the draft report provided context on other federal agencies’ role in establishing relevant standards and guidelines for lead paint hazards. We made no changes in response to this comment because we did not believe additional background information was necessary. HUD disagreed with the draft report’s sixth recommendation to request authority from Congress to use the risk assessment inspection standard to identify lead paint hazards in the Housing Choice Voucher program. As discussed in the report, HUD’s Lead Safe Housing Rule requires a more stringent lead inspection standard (risk assessments) for public housing than for Housing Choice Voucher units, for which a weaker inspection standard is used (visual assessments).
In its written comments, HUD said that before deciding whether to request the statutory authority to implement risk assessments for voucher units, it would need to conduct and evaluate the results of a statistically rigorous study on the impacts of requiring a lead risk assessment versus a visual assessment, such as the impact on leasing times and the availability of housing for low-income families. HUD further noted that such a study could explore whether alternative options to the full risk assessment standard (such as targeted dust sampling) could achieve similar levels of protection for children in the voucher program. We note that requesting and obtaining authority to amend the standard for the Housing Choice Voucher program would not preclude HUD from doing such a study. We acknowledge that the results of such a study might support a range of options. Therefore, we revised our recommendation to provide HUD with greater flexibility in how it might amend the lead inspection standard for the voucher program based on consideration of not only leasing time and availability of housing, as HUD emphasized in its written comments, but also the health effects on children. The need for HUD to review the lead inspection standard for the voucher program is underscored by the greater number of households with children served by the voucher program compared to public housing, as well as recent information indicating that more children with elevated blood lead levels are living in voucher units than in public housing. HUD generally agreed with our remaining eight recommendations and provided specific information about planned steps and other considerations related to implementing them. For example, in response to our first three recommendations on the lead grant programs, HUD outlined specific steps it plans to take, such as updating its guidance for scoring grant applications and reviewing its grant application scoring methods to identify potential improvements. In response to our fourth and fifth recommendations to the Director of HUD’s Lead Office on compliance monitoring and enforcement of lead paint regulations, HUD noted that PIH should be the primary office for these recommendations, with the Lead Office providing support. While these recommendations had already recognized the need for the Lead Office to collaborate with PIH, we reworded them to clarify that it is not necessary for the Lead Office to have primary responsibility for their implementation. HUD generally agreed with our seventh and eighth recommendations but noted some considerations for implementing them. For our seventh recommendation about performance goals and measures, HUD noted that it will re-examine the availability of information from the current housing databases to determine whether data on housing unit production can be added to the existing data collected. HUD noted that if that information is not sufficient, it would need to obtain Office of Management and Budget approval and have sufficient funds for such an information technology project. For our eighth recommendation about evaluating the Lead Safe Housing and Lead Disclosure Rules, HUD noted that if its own resources are insufficient, the time frame for implementing this recommendation may depend on the availability of funding for contracted resources.
Finally, in response to our ninth recommendation, HUD said that it will draft and submit annual and biennial reports to the congressional authorizing and appropriations committees and then post the reports on the Lead Office’s public website. We are sending copies of this report to the appropriate congressional committees, the Secretary of the Department of Housing and Urban Development, the Administrator of the Environmental Protection Agency, the Secretary of Health and Human Services, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or garciadiazd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.

Appendix I: Nonfederal Funding Sources Used by Selected Grantees of HUD Lead Hazard Control Grants

Under the Department of Housing and Urban Development’s (HUD) Lead-Based Paint Hazard Control and the Lead Hazard Reduction Demonstration grant programs, HUD competitively awards grants to state and local jurisdictions, as authorized by the Residential Lead-Based Paint Hazard Reduction Act (Title X of the Housing and Community Development Act of 1992). Title X requires each grant recipient to make matching contributions with state, local, and private funds (i.e., nonfederal) toward the total cost of activities. For the Lead-Based Paint Hazard Control grant and the Lead Hazard Reduction Demonstration grant programs, the matching contribution has been set at no less than 10 percent and 25 percent, respectively, of the total grant amount. For example, if the total grant amount is $3 million, then state or local jurisdictions must provide at least $300,000 and $750,000, respectively, for each grant program, in additional funding toward the cost of activities. HUD requires lead grant applicants to include information on the sources and amounts of grantees’ matching contributions as part of their grant applications. Additionally, Title X requires HUD to award grants in part based on an applicant’s ability to leverage state, local, and private funds to supplement the federal grant funds. To identify the nonfederal funding sources grantees used in the lead hazard control grants, we selected and reviewed the lead grant applications of 20 HUD grantees and interviewed representatives from 10 of these grantees. We selected these grantees based on their geographic locations; the number of HUD lead grants they had previously received; experience with HUD’s lead hazard control grants; and whether they had received both grants from 2013 through 2017. Grantees we selected included entities at the state, municipality, and county levels. Information from our grant application reviews and interviews of grantees cannot be generalized to all HUD grantees. Based on our review of the selected grant applications and interviews of selected grantees, we found that grantees planned to use the following types of nonfederal funding sources as their matching contributions to support their lead grant activities:

State and local funds. Eighteen of the 20 grantees we selected noted that they planned to use state or local funding sources to supplement HUD’s grant funds. The state and local funding sources included state or local general funds and local property taxes or fees.
For example, grantees in Connecticut, Baltimore, and Philadelphia used state or local general funds to cover personnel and operating costs. Additionally, grantees in Alameda County (California), Hennepin County (Minnesota), Malden, St. Louis, and Winnebago County (Illinois) planned to use local taxes, including property taxes or fees, such as real estate recording and building permit fees, to cover some costs associated with their lead hazard control grant activities.

Community Development Block Grant funds. Ten of the 20 grantees we selected indicated that they planned to use Community Development Block Grant (CDBG) program funds to cover part of the costs of their lead hazard control grants. CDBG program funds can be used by states and local communities for housing, economic development, neighborhood revitalization, and other community development activities. For example, grantees in Baltimore and Memphis noted in their grant applications that they planned to use the funds to cover costs related to personnel, operations, and training.

Nongovernmental contributions or discounts. Eight of the 20 grantees we selected stated that they anticipated some form of nongovernmental contributions from nonprofit organizations or discounts from contractors to supplement the lead grants. For example, all eight grantees stated that they expected to receive matching contributions from nonprofit organizations. Table 2 summarizes the nonfederal funds by source that the 20 selected grantees planned to use, based on our review of these grantees’ applications. Furthermore, almost all of the selected grantees stated in their grant applications or told us that they expected to receive or have received other nonfederal funds in excess of their matching contributions. For example, 15 grantees stated that they generally required or encouraged property owners or landlords to contribute toward the lead hazard remediation costs. Also, grantees in Baltimore, the District of Columbia, Lewiston, and Providence indicated that they expected to receive monetary or in-kind donations from organizations to help carry out lead hazard remediation, blood lead-level testing, or training. Additionally, the grantee in Alameda County (California) told us that it had received nonfederal funds from a litigation settlement with a private paint manufacturer.

Appendix II: Objectives, Scope, and Methodology

This report examines the Department of Housing and Urban Development’s (HUD) efforts to (1) incorporate statutory requirements and other relevant federal standards in its lead grant programs; (2) monitor and enforce compliance with lead paint regulations for its rental assistance programs; (3) adopt federal health guidelines and environmental standards for lead hazards in its lead grant and rental assistance programs; and (4) measure and report on its performance related to making housing lead-safe. In this report, we examine lead paint hazards in housing, and we focus on HUD’s lead hazard control grant programs and its two largest rental assistance programs that serve the most families with children: the Housing Choice Voucher (voucher) and public housing programs. To address all four objectives, we reviewed relevant laws, such as the Residential Lead-Based Paint Hazard Reduction Act (Title X of the Housing and Community Development Act of 1992, referred to as Title X throughout this appendix) and relevant HUD regulations, such as the Lead Safe Housing Rule and a January 2017 amendment to this rule.
To examine trends in funding for HUD’s lead grant programs for the past 10 years, we also reviewed HUD’s budget information for fiscal years 2008 through 2017. We interviewed HUD staff from the Office of Lead Hazard Control and Healthy Homes (Lead Office), Office of Public and Indian Housing (PIH), Office of Policy Development and Research (PD&R), and other relevant HUD program and field offices. Finally, we reviewed our prior work and that of HUD’s Office of Inspector General.

To address the first objective, we reviewed HUD’s Notices of Funding Availability (funding notices), policies, and procedures to identify HUD’s grant award processes for the Lead-Based Paint Hazard Control grant and Lead Hazard Reduction Demonstration grant programs. For example, we reviewed HUD’s annual notices of funding availability from 2013 through 2017 to identify HUD’s scoring factors for evaluating grant applications. We compared HUD’s grant award processes in 2017 with Title X statutory requirements, the Office of Management and Budget (OMB) requirements for awarding federal grants, and relevant federal internal control standards. We also interviewed HUD staff about the agency’s grant application review and award processes. To determine the extent to which HUD’s grants have gone to counties in the United States potentially at high risk for lead paint hazards, we compared grantee locations from HUD’s lead grant data for grants awarded from 2013 through 2017 with county-level data on two indicators of lead paint hazard risk from the 2011–2015 American Community Survey—a continuous survey of households conducted by the U.S. Census Bureau. We analyzed HUD’s grant data to determine the number and dollar amount of grants received by each grantee, and the grantees’ addresses. We then conducted a geographic analysis to determine whether each HUD lead grant went to a county that met one, both, or neither of two commonly known indicators of lead paint hazard risk—the age of housing and poverty level. We identified these two indicators through a review of relevant academic literature, agency research, and state lead modeling methodologies. We used data from the 2011–2015 American Community Survey because the data covered a time frame that best aligned with the 5 years of lead grant data (2013 through 2017). Using its county-level data, we calculated an estimated average percentage nationwide of housing units built before 1980 (56.9 percent) and an estimated average percentage nationwide of individuals living below the poverty level (17.5 percent). We used 1980 as a benchmark for age of housing because the American Community Survey data for age of housing are separated by the decade of construction and 1980 was closest in time to the 1978 federal lead paint ban. We categorized counties based on whether their levels of pre-1980 housing and poverty were above one, both, or neither of the respective national average percentages for each indicator. The estimated average nationwide and county-level percentages of the two indicators (i.e., older housing and poverty rate) are expressed as a range of values. For the lower and upper ends of the range, we generated a 95 percent confidence interval that was within plus or minus 20 percentage points. We classified a county as above the estimated average percentages nationwide if the county’s confidence interval was higher than and did not overlap with the nationwide estimate’s confidence interval.
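To make the classification rule described above concrete, the following is a minimal illustrative sketch in Python. The county names, interval values, and function names are hypothetical assumptions for illustration only; this is not GAO's actual analysis code or data.

from typing import NamedTuple

class Interval(NamedTuple):
    lower: float  # lower bound of the 95 percent confidence interval
    upper: float  # upper bound of the 95 percent confidence interval

def above_national_average(county: Interval, national: Interval) -> bool:
    """A county counts as 'above' only if its confidence interval is
    higher than and does not overlap with the nationwide interval."""
    return county.lower > national.upper

# Hypothetical nationwide intervals around the point estimates cited in
# the report (56.9 percent pre-1980 housing; 17.5 percent poverty).
national_pre1980 = Interval(55.9, 57.9)
national_poverty = Interval(16.5, 18.5)

# Hypothetical county-level intervals: (pre-1980 housing, poverty).
counties = {
    "County A": (Interval(70.1, 74.3), Interval(21.0, 24.8)),
    "County B": (Interval(50.2, 58.0), Interval(10.1, 12.9)),
}

for name, (pre1980, poverty) in counties.items():
    flags = [
        above_national_average(pre1980, national_pre1980),
        above_national_average(poverty, national_poverty),
    ]
    # Categorize each county as above on both indicators, one, or neither.
    category = {2: "both", 1: "one", 0: "neither"}[sum(flags)]
    print(f"{name}: above the national average on {category} indicator(s)")

Under this rule, a county whose interval overlaps the nationwide interval is not counted as above the national average, even if its point estimate is higher.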
We omitted the data for 12 counties that we determined were unreliable for our purposes. We analyzed data starting in 2013 because that was the first year for which these grant data were available electronically. We also interviewed HUD staff to understand their efforts and plans to perform similar analyses using indicators of lead paint hazard risk. To assess the reliability of HUD’s grant data, we reviewed documentation of HUD’s grant database, interviewed Lead Office staff on the processes HUD used to collect and ensure the reliability of the data, and tested the data for missing values, outliers, and obvious errors. To assess the reliability of the American Community Survey data, we reviewed statistical information from the Census Bureau and other publicly available documentation on the survey and conducted electronic testing of the data. We determined that the HUD grant data and American Community Survey county-level data on age of housing and poverty were sufficiently reliable for identifying areas at risk of lead paint hazards and determining the extent to which lead grants from 2013 through 2017 have gone to at-risk areas. Furthermore, to obtain information about how HUD works with grantees to achieve program objectives, we conducted in-person site visits to five grantees located in five localities (Alameda County, California; Atlanta, Georgia; Baltimore, Maryland; District of Columbia; and San Francisco, California) and interviewed an additional five grantees on the telephone (Hennepin County, Minnesota; Lewiston, Maine; Malden, Massachusetts; Providence, Rhode Island; and Winnebago County, Illinois). In addition, we reviewed the grant applications of the 10 grantees we spoke with and 10 additional grantees from 10 other jurisdictions (State of Connecticut; Cuyahoga County, Ohio; Denver, Colorado; Monroe County, New York; Philadelphia, Pennsylvania; Memphis, Tennessee; San Antonio, Texas; St. Louis, Missouri; Tucson, Arizona; and State of Vermont). We selected the 10 grantees for site visits or interviews based on the following criteria: geographic variation, the number of years the grantees had held HUD lead grants, and whether they had received both types of lead grants from 2013 through 2017. We selected the 10 additional grantees’ applications for review based on geographic diversity and to achieve a total of two applications for each year during our 5-year time frame, with at least one application from each of the two HUD lead grant programs. As part of our review of selected grant applications, we identified nonfederal funding sources used by grantees, such as local tax revenues, contractor discounts, and property owner contributions. Information from the selected grantees and grant application reviews cannot be generalized to those grantees we did not include in our review. Additionally, we interviewed representatives from housing organizations to obtain additional examples of any nonfederal funding sources, such as state or local bond measures, or low-interest loans to homeowners.

To address the second objective, we also reviewed HUD guidance and internal memorandums related to its efforts to monitor and enforce compliance with lead paint regulations for public housing agencies (PHAs), the entities that manage HUD’s voucher and public housing rental assistance programs.
In addition, we reviewed HUD’s documentation of databases it uses to monitor compliance, including the Lead-Based Paint Response Tracker and the Elevated Blood Lead Level Tracker, and observed HUD staff’s demonstrations of these databases. HUD staff also provided a demonstration of the Record and Process Inspection Data database (known as “RAPID”) used by HUD’s Real Estate Assessment Center to collect physical inspection data for public housing units. We obtained and reviewed information from HUD about instances of potential noncompliance with lead paint regulations by PHAs as of November 2017 and enforcement actions HUD has taken. We compared HUD’s regulatory compliance monitoring and enforcement approach to federal internal control standards. We interviewed staff from HUD’s Lead Office, Office of General Counsel, and Office of Field Operations, as well as field staff, including four HUD regional directors in areas of the country known to have a high prevalence of lead paint hazards, about internal procedures for monitoring and enforcing compliance with lead paint regulations by the PHAs within their respective regions.

To address the third objective on HUD’s adoption of federal health guidelines and environmental standards for lead paint hazards in its lead grant and rental assistance programs, we reviewed relevant rules and HUD documentation. To identify relevant federal health guidelines and environmental standards, we reviewed guidelines and regulations from the Centers for Disease Control and Prevention (CDC) and the Environmental Protection Agency (EPA) and interviewed staff from each agency. To identify state and local laws with requirements that differ from these federal guidelines and standards, we obtained information from and interviewed staff from CDC’s Public Health Law Program and the National Conference of State Legislatures. We compared HUD’s requirements to CDC’s health guideline known as the “blood lead reference value” and to EPA’s standards for lead-based paint hazards and lead dust clearance. Finally, we reviewed information in HUD’s 2017 funding notices and lead grant programs’ policy guidance about requirements for grantees as they pertain to health guidelines and environmental standards. We also interviewed HUD staff about how HUD has used the findings from lead technical study grants to consider changes to HUD’s requirements and processes regarding identifying and addressing lead paint hazards for the grant programs.

To address the fourth objective, we reviewed HUD documentation related to performance goals and measures, program evaluations, and reporting. For example, we reviewed HUD’s recent annual performance reports to identify goals and performance measures related to HUD’s efforts to make housing lead-safe. Further, we reviewed Title X to identify requirements related to evaluating and reporting on HUD’s lead efforts. We reviewed program evaluations and related studies completed by outside experts for the lead grant programs and interviewed staff from one of the organizations that conducted the evaluations. In addition, we interviewed Lead Office and PD&R staff about the agency’s plans to evaluate the requirements in the Lead Safe Housing Rule and reviewed corresponding agency documentation about these plans. Additionally, we reviewed the Lead Office’s most recent strategic plan (2009) and annual report (1997) on the agency’s lead efforts.
We compared HUD’s use of performance goals and measures, program evaluations, and reporting against leading practices for assessing program performance and federal internal control standards. Finally, we interviewed HUD staff to understand the goals and performance measures the agency uses to assess its lead efforts.

We conducted this performance audit from March 2017 to June 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix III: Comments from the Department of Housing and Urban Development

Appendix IV: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, John Fisher (Assistant Director), Beth Faraguna (Analyst in Charge), Enyinnaya David Aja, Farah Angersola, Carol Bray, William R. Chatlos, Anna Chung, Melinda Cordero, Elizabeth Dretsch, Christopher Lee, Marc Molino, Rebecca Parkhurst, Tovah Rom, Tyler Spunaugle, and Sonya Vartivarian made key contributions to this report.
Why GAO Did This Study

Lead paint in housing is the most common source of lead exposure for U.S. children. HUD awards grants to state and local governments to reduce lead paint hazards in housing and oversees compliance with lead paint regulations in its rental assistance programs. The joint explanatory statement accompanying the 2017 Consolidated Appropriations Act includes a provision for GAO to review HUD’s efforts to address lead paint hazards. This report examines HUD’s efforts to (1) incorporate statutory requirements and other relevant federal standards in its lead grant programs, (2) monitor and enforce compliance with lead paint regulations in its rental assistance programs, (3) adopt federal health guidelines and environmental standards for its lead grant and rental assistance programs, and (4) measure and report on the performance of its lead efforts. GAO reviewed HUD documents and data related to its grant programs, compliance efforts, performance measures, and reporting. GAO also interviewed HUD staff and some grantees.

What GAO Found

The Department of Housing and Urban Development’s (HUD) lead grant and rental assistance programs have taken steps to address lead paint hazards, but opportunities exist for improvement. For example, in 2016, HUD began using new tools to monitor how public housing agencies comply with lead paint regulations. However, HUD could further improve efforts in the following areas:

Lead grant programs. While its recent grant award processes incorporate statutory requirements on applicant eligibility and selection criteria, HUD has not fully documented or evaluated these processes. For example, HUD’s guidance is not sufficiently detailed to ensure consistent and appropriate grant award decisions. Better documentation and evaluation of HUD’s grant program processes could help ensure that lead grants reach areas at risk of lead paint hazards. Further, HUD has not developed specific time frames for using available local-level data to better identify areas of the country at risk for lead paint hazards, which could help HUD target its limited resources.

Oversight. HUD does not have a plan to mitigate and address risks related to noncompliance with lead paint regulations by public housing agencies. GAO identified several limitations with HUD’s monitoring efforts, including reliance on public housing agencies’ self-certification of compliance with lead paint regulations and challenges identifying children with elevated blood lead levels. Additionally, HUD lacks detailed procedures for addressing noncompliance consistently and in a timely manner. Developing a plan and detailed procedures to address noncompliance with lead paint regulations could strengthen HUD’s oversight of public housing agencies.

Inspections. The lead inspection standard for the Housing Choice Voucher program is less strict than that of the public housing program. By requesting and obtaining statutory authority to amend the standard for the voucher program, HUD would be positioned to take steps to better protect children in voucher units from lead exposure as indicated by analysis of benefits and costs.

Performance assessment and reporting. HUD lacks comprehensive goals and performance measures for its lead reduction efforts. In addition, it has not complied with annual statutory reporting requirements, last reporting as required on its lead efforts in 1997. Without better performance assessment and reporting, HUD cannot fully assess the effectiveness of its lead efforts.
What GAO Recommends

GAO makes nine recommendations to HUD, including that it improve lead grant program and compliance monitoring processes, request authority to amend its lead inspection standard in the voucher program, and take additional steps to report on progress. HUD generally agreed with eight of the recommendations. HUD disagreed that it should request authority to use a specific, stricter inspection standard. GAO revised this recommendation to allow HUD greater flexibility to amend its current inspection standard as indicated by analysis of the benefits and costs.
Background

VA’s Disability Compensation Claims Process

The Department of Veterans Affairs’ (VA) process for deciding veterans’ eligibility for disability compensation begins when a veteran submits a claim to VA. The veteran submits his or her claim to one of the Veterans Benefits Administration’s (VBA) 56 regional offices, where staff members assist the veteran by gathering additional evidence, such as military and medical records, that is needed to evaluate the claim. Based on this evidence, VBA decides whether the veteran is entitled to compensation and, if so, how much. A veteran dissatisfied with the initial claim decision can generally appeal within 1 year from the date of the notification letter VBA sends to the veteran. Under the current appeals process (now referred to by VA as the legacy process), an appeal begins with the veteran filing a Notice of Disagreement. VBA then re-examines the case and generally issues a Statement of the Case that represents its decision. A veteran dissatisfied with VBA’s decision can file an appeal with the Board of Veterans’ Appeals (Board). In filing that appeal, the veteran can indicate whether a Board hearing is desired. Before the Board reviews the appeal, VBA prepares the file and certifies it as ready for Board review. If the veteran requests a hearing to present new evidence or arguments, the Board will hold a hearing by videoconference or at a local VBA regional office. The Board’s members, also known as Veterans Law Judges, review the evidence and either issue a decision to grant or deny the veteran’s appeal or refer (or remand) the appeal back to VBA for further work.

New Appeals Process

The Veterans Appeals Improvement and Modernization Act of 2017 (the Act) made changes to VA’s legacy appeals process that will generally take effect no earlier than February 2019, which is approximately 18 months from the date of enactment. According to its appeals plan, VA intends to implement the Act by replacing the current appeals process with a process offering veterans who are dissatisfied with VBA’s decision on their claim one of five options: two of those options afford the veteran an opportunity for an additional review of VBA’s decision within VBA, and the other three options afford the veteran the opportunity to bypass additional VBA review and appeal directly to the Board. Under the new appeals process, the two VBA options will be:

1. Request higher-level review: The veteran asks VBA to review its initial decision based on the same evidence but with a higher-level official reviewing and issuing a new decision.

2. File supplemental claim: The veteran provides additional evidence and files a supplemental claim with VBA for a new decision on the claim.

The three Board options will be:

3. Request Board review of existing record: The veteran appeals to the Board and asks it to review only the existing record without a hearing.

4. Request Board review of additional evidence, without a hearing.

5. Request Board review of additional evidence, with a hearing.

VA’s Appeals Plan

The Act also requires VA to submit to the appropriate committees of Congress and GAO, within 90 days of the date of enactment, a comprehensive plan for (1) processing appeals under the legacy process until there are no more to process, (2) implementing the new appeals process, (3) processing of claims under the new appeals process in a timely manner, and (4) monitoring implementation of the new appeals process.
In addition to these four broad elements, the Act lists 18 elements required to be included in the plan that relate to, among other things: staffing, information technology (IT), and other resources required to implement the plan; estimated timelines for hiring and training VA employees; and a description of risks associated with each element of the plan. The Act also includes a provision for GAO to assess the plan within 90 days after VA submits it. The Act also requires VA to provide progress reports to the appropriate committees of Congress and GAO at least once every 90 days (starting after VA submits its plan), until the date the Act’s legal changes to the appeals process generally go into effect, and then at least once every 180 days after this date for 7 years.

Rapid Appeals Modernization Program (RAMP)

The Act also authorized VA to carry out a program to test any assumptions relied upon in developing its comprehensive plan and test the feasibility and advisability of any facet of the new appeals process. In its appeals plan, VA reported its decision to pilot test two of the five new options by allowing veterans with pending appeals in the legacy process (known as legacy appeals) to elect the VBA supplemental claim or the higher-level review options beginning in November 2017. This program, which VA refers to as RAMP, is intended to reduce legacy appeals by providing veterans with a chance for early resolution of their claims within VBA while the Board focuses on reducing its inventory of legacy appeals, according to VA. Participation in RAMP is voluntary, but veterans must withdraw their pending legacy appeal to participate, according to VA. Veterans dissatisfied with their RAMP decisions must wait until VA fully implements the new appeals process (in February 2019 at the earliest) before pursuing an appeal with the Board under the new process, according to VA officials.

VA’s Plan Addresses Most of the Act’s Required Elements for the New and Legacy Disability Appeals Processes

VA’s appeals plan addresses 17 of the Act’s 22 required elements, partially addresses 4 related to monitoring implementation and workforce planning, and does not address 1 element related to identifying total resources. For example, VA’s appeals plan addresses the required elements related to, among others, identifying legal authorities for hiring and removing employees, estimating timelines for hiring and training employees, and outlining the outreach VA expects to conduct. For the elements in the Act that VA’s appeals plan partially addresses or does not address, see table 1. For a detailed list of the 22 required elements in the Act, see appendix I. When we provided VA with our preliminary assessment, VA officials said they disagreed with our assessment and that their appeals plan addresses all 22 of the required elements. In general, they said that data are not available, and VA cannot yet forecast the information required by the Act until aspects of the new appeals process are tested or implemented. We continue to believe the information as presented in VA’s appeals plan and supplemental materials addresses 17 of the required elements, partially addresses 4, and does not address 1 element. Without complete information on all 22 of the required elements, Congress does not have the information it needs to fully conduct oversight of VA’s appeals plan and the agency’s efforts to implement and administer the new process while addressing legacy appeals.
VA also is required to provide information on resources, among other areas, before it can certify that the agency is prepared to carry out timely processing of appeals under the new and legacy appeals processes. Further, as discussed below, addressing required elements through a more comprehensive plan and underlying analysis is consistent with sound planning practices and would better position VA to implement the new appeals process while attending to legacy appeals. For example, such a plan would provide for carefully monitoring the new and legacy appeals processes against balanced goals and metrics, and would clearly articulate resources, milestones, and other information needed for effective program management.

VA’s Appeals Plan Reflects Certain Sound Planning Practices, but Could Improve on Others

VA’s appeals plan reflects certain sound planning practices, such as convening a working group on performance tracking; however, the plan could benefit from including important details related to three key planning areas:

1. articulating a balanced set of goals and related measures to monitor and assess the performance of the new appeals process, in conjunction with the legacy process;

2. developing a high-quality and reliable implementation schedule to manage key steps and activities of the project; and

3. assessing key risks in a comprehensive manner, including respective mitigation strategies, articulating clear criteria and an assessment plan for RAMP, and more fully testing or analyzing all appeal options.

VA’s Appeals Plan Indicates Steps to Assess Process Changes, but Should Also Include Goals and Measures to Provide Full Picture of Success

VA’s appeals plan reflects steps taken to track performance, but it could improve its planning practices related to monitoring and assessing performance on a range of key dimensions of success. Sound planning practices suggest that agencies develop overall goals tied to meaningful and balanced performance measures. These measures include a mix of outcome, output, and efficiency measures to ensure that an organization’s priorities—as well as government-wide priorities such as quality, timeliness, and cost of service—are addressed. VA’s appeals plan reports that the agency convened a working group to design a process for tracking timeliness of both the legacy appeals and appeals within the new process. In supporting documentation that we requested, VA officials stated they are also determining the best way to measure veterans’ satisfaction with the new appeals process. VA’s appeals plan and supporting documentation also identify timeliness goals for the two VBA-only options and one of the three Board options. Nevertheless, its appeals plan does not articulate a set of goals and measures that cover all aspects of its new appeals process, such as accuracy of decisions and cost. The plan also does not provide details on the metrics the agency will develop, how it will assess whether the new appeals process is an improvement over the legacy appeals process, and how it will monitor the allocation of resources between legacy and new appeals claims. More specifically:

VA’s reported timeliness measures are incomplete: VA’s appeals plan outlines timeliness goals for the two VBA options (average processing time of 125 days) and for the Board option that does not include new evidence or a hearing (average processing time of 365 days).
However, VA’s plan does not establish timeliness goals for the other two Board options: Board review of additional evidence without a hearing and Board review of additional evidence with a hearing. In commenting on our assessment, VA officials indicated that while they expect the new process to be more efficient than the legacy process (and, therefore, more timely), data to inform goal setting for all Board options will not be available until VA fully implements these options. However, establishing timeliness goals for all options would provide a more complete picture of VA’s vision for the new appeals process, and help VA to develop concrete, objective, and observable performance measures to show progress in achieving that vision, as well as inform resource estimates.

VA’s reported measures lack adequate balance: Other than including certain timeliness goals, VA’s appeals plan does not articulate additional aspects of performance important for managing appeals, such as accuracy of decisions, veteran satisfaction with the process, or cost. We previously reported that VA officials said they also wanted to use veteran survey results, wait times, and inventories as sources of information to measure progress under the new appeals process. Further, VA’s fiscal year 2018 annual performance plan includes an overall customer satisfaction score for veterans’ benefits. However, these and other potential measures of success are not specified in VA’s appeals plan for monitoring the new appeals process as compared with legacy appeals. By not articulating a set of comprehensive and balanced goals and measures in its appeals plan, VA could be inadvertently creating skewed incentives by focusing on one area of program performance to the detriment of other areas (e.g., processing claims quickly but inaccurately). In commenting on our assessment, VA officials recognized the need to develop additional goals and measures and indicated, for example, that they are developing and testing whether the existing quality assurance goal—requiring 92 percent accuracy—is appropriate for the new process. According to VA officials, once they have developed these other goals and measures, VA will communicate this information as part of the required progress reports to the appropriate committees of Congress and GAO.

VA’s plan does not reflect how it will establish baseline data: VA’s approach for evaluating the efficiency and effectiveness of the implementation of the new appeals process falls short of sound practices for using baseline data to assess performance. Our prior work has demonstrated that by tracking and developing a performance baseline for all measures, including those that demonstrate the effectiveness of a program, agencies can better evaluate progress made and whether goals are being achieved. However, VA’s appeals plan did not provide important details about what aspects of the new appeals process’ performance will be compared to what aspects of the legacy process’ performance. In particular, section 5 of the Act lists a number of metrics VA is required to report periodically, including some that could be used as baseline measures. For example, VA is required to periodically publish on its website the average time that elapsed between the filing of an initial claim and the final resolution of the claim, for legacy appeals as well as appeals under the new system, which is consistent with our prior recommendation.
However, VA’s appeals plan does not explain how or when the agency would collect and use these or other data about the legacy and new processes’ performance—such as accuracy, veteran satisfaction, and cost—to assess their relative performance. As we had previously reported, VA’s business case for reform in some instances relied on unproven assumptions and limited analyses of its legacy process to identify root causes of performance problems. Specifically, VA determined that the open-ended nature of its legacy appeals process, whereby a veteran can submit additional evidence numerous times at any point during the VA appeals process, can cause additional cycles of re-adjudication, a process VA refers to as “churning.” According to VA, this re-adjudication can occur multiple times and can add years to the time needed to reach a final decision on an appeal. Without fully articulating a plan for collecting and using baseline and trend data, VA cannot determine the extent to which the new appeals process, which also allows for multiple appeal opportunities, will achieve final resolution of veterans’ appeals sooner, on average, than the legacy process. In commenting on our assessment, VA indicated that it is working toward capturing the metrics listed in section 5 of the Act. VA officials also noted that reporting on the new appeals process will require IT system functionality that currently does not exist, but stated that efforts are underway to add this functionality. VA’s plan does not explain how the agency will monitor processing of legacy versus new appeals: In addition, VA’s appeals plan does not fully articulate how the agency will monitor whether resources are being appropriately devoted to both the new and legacy appeals process and how it will track both sets of workloads. An appeals plan that does not specifically articulate how VA will manage the two processes in parallel exposes the agency to risk that veterans with appeals in the legacy process may experience significant delays or otherwise poor results relative to those in the new appeals process or vice versa. In commenting on our assessment, VA officials noted that VA was not required under section 3 of the Act to provide a description of its plans to capture metrics listed in section 5. Even if not required by the Act, developing an approach for carefully monitoring the management of new and legacy appeals would help VA track progress being made and achievement of goals. Until VA establishes complete and balanced goals and measures, identifies baseline data, and develops a plan for monitoring and assessing both the new and legacy processes, VA runs the risk of promoting skewed behaviors, or not fully understanding whether the new process is an improvement or whether veterans with appeals in the legacy process are experiencing poor results. VA’s Appeals Plan Needs a Reliable Implementation Schedule to Manage the Project VA’s appeals plan reflects certain aspects of sound planning practices related to managing the implementation of process change; however, other key components are not addressed. Sound planning practices for implementing process change suggest establishing a transition team. Consistent with such practices, VA’s appeals plan states that the agency convened an agency-wide governance structure to coordinate implementation of its new appeals process; it is comprised of senior-level employees with authority to make necessary decisions to keep the project on track. 
VA’s appeals plan also includes a copy of a master schedule. In its plan, VA asserts that the master schedule reflects timelines, interim goals and milestones, reporting requirements, and established deadlines, and that it will be used to guide implementation. VA’s appeals plan also reports that VA is consulting with project management professionals, who are using the master schedule, among other tools, to monitor implementation. In addition, VA made progress addressing some of the issues we previously identified by developing steps and timetables for updating training in anticipation of implementing the new appeals process. However, VA’s master schedule for implementing reform is missing elements of a high-quality and reliable implementation schedule for key activities. We have previously reported that having a well-planned schedule is a fundamental management tool. Generally recognized sound practices from the Project Management Institute (PMI) and GAO call for organizations to employ an integrated and reliable master schedule that defines when work activities will occur, who will complete the work, how long they will take, how they are related to one another, and the constraints affecting the start and completion of work elements, as well as whether resources will be available when they are needed. Such a project management schedule not only provides a road map for systematic project execution, but also provides the means by which to gauge progress, identify and address potential problems, and promote accountability. The master schedule VA provided in its appeals plan should have included other sound practices for project management related to a reliable schedule. Specifically: Key activities and their duration are not included: VA’s master schedule does not capture the Rapid Appeals Modernization Program (RAMP) activities, even though this pilot test is occurring at the same time VA is preparing for full implementation of appeals options at VBA and the Board. In addition, specific Board-related activities are missing from the schedule, such as efforts to develop metrics, and the schedule and other project plans we reviewed do not go beyond February 2019. For example, the schedule does not indicate the period of time when VA expects to no longer be processing legacy appeals. When all key and necessary activities are not included, it raises questions about whether all activities are scheduled in the correct order, resources are properly allocated, or the estimated completion dates are reliable. In addition, if the schedule does not fully and accurately reflect VA’s efforts, it will not serve as an appropriate basis for analysis and may result in unreliable completion dates and delays. Sequencing and linkages among activities are not identified: For the high-level activities VA’s appeals plan identifies, VA’s master schedule does not indicate whether there were linkages or sequencing among them, which is not consistent with sound scheduling practices. Linkages and sequencing would show, for example, if any of these activities or sub-activities must finish prior to the start of other activities, or the amount of time an activity could be delayed before the delay affects VA’s estimated implementation date. For example, VA cannot train new employees until after it hires them. The activities VA identifies also do not appear supported by lower- level project schedules. 
Specifically, when we requested documentation to support VA's high-level summary of activities and milestones, VA officials did not provide intermediate or more detailed schedules that reflected these practices. In particular, VA's appeals plan lacks a complete schedule for IT modifications that clearly defines what is to be achieved and the time frames for achievement. We previously recommended that VA develop a schedule for IT updates that explicitly addresses when and how process reform will be integrated into new systems and when these systems will be ready to support the new appeals process at its onset. For example, VA's appeals plan references several required IT modifications that do not appear in its master schedule. Schedules that are defined at too high a level may disguise risk that is inherent in lower-level activities.

Interim goals are not reflected: VA officials stated that they have interim goals and milestones, though VA's appeals plan and supporting documentation generally do not include this information. Sound planning and redesign practices suggest closely monitoring implementation and developing project goals that include a mix of intermediate goals to be met at various stages. We previously recommended that VA develop a more robust plan for closely monitoring implementation of process reform, including metrics and interim goals to help track progress, evaluate efficiency and effectiveness, and identify trouble spots—all of which are consistent with sound planning practices.

Resources are not assigned to all identified activities: The high-level summary schedule that VA provided us also lacks details regarding the assignment of resources for all activities. Specifically, while the plan identifies workgroups responsible for coordinating elements in the plan, such as regulations, training, and outreach, the schedule does not assign resources to the 40 listed activities. As discussed previously, VA's appeals plan also does not provide information on the total resources required for this reform effort. Assigning resources to the listed activities, along with other schedule details, could provide a better indication of the estimated total resources required to implement the new appeals process and address legacy appeals.

In commenting on our assessment, VA officials stated that the agency is developing lower-level project schedules for key activities—such as RAMP and IT requirements—and will provide these schedules as part of the required progress reports to the appropriate committees of Congress and GAO. VA officials also noted that future updates will include additional dependencies and risks, which VBA and the Board are still developing. Until VA has a robust integrated master schedule, supported by detailed project plans that adhere to sound practices, VA's appeals plan does not provide reasonable assurance that decision makers have the essential program management information needed for this complex and important effort.

VA's Plan Addresses Some but Not All Key Risks Related to the New Appeals Process

VA's appeals plan includes an assessment of risks involved in implementing the new appeals system, but it could more comprehensively reflect key risks posed by such a significant reform effort. VA's appeals plan and supplementary materials include a "risk register" that describes risks associated with many elements of its plan and the remaining level of risk after its planned response to these risks.
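To make concrete what a risk register of this kind typically captures, the following minimal Python sketch is our own illustration, not VA's actual register: the two risk entries, the 1-to-5 scoring scale, and the field names are all hypothetical. It shows how a planned response reduces an initial risk exposure to a residual level, which is the quantity VA's register reports.

    # Minimal, illustrative risk register; entries and the 1-5 scale are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Risk:
        description: str
        likelihood: int           # 1 (rare) to 5 (almost certain), before response
        impact: int               # 1 (minor) to 5 (severe), before response
        planned_response: str
        residual_likelihood: int  # expected likelihood after the planned response
        residual_impact: int      # expected impact after the planned response

        def exposure(self) -> int:
            return self.likelihood * self.impact

        def residual_exposure(self) -> int:
            return self.residual_likelihood * self.residual_impact

    register = [
        Risk("Low opt-in to the pilot", 4, 4, "Targeted outreach letters", 3, 4),
        Risk("IT functionality not ready", 3, 5, "Phased IT releases", 2, 4),
    ]

    # Rank risks by what remains after the planned response.
    for r in sorted(register, key=Risk.residual_exposure, reverse=True):
        print(f"{r.description}: exposure {r.exposure()} -> residual {r.residual_exposure()}")

Ranking by residual rather than initial exposure, as this sketch does, keeps management attention on the risks that planned responses do not fully resolve.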
VA’s appeals plan also states that senior leaders will receive regular updates of risks and mitigation strategies. However, because VA has not yet articulated a balanced set of performance goals and measures in its appeals plan, it is hindered in its ability to identify and assess risks. Federal internal control standards state, and our previous work at VA and other agencies demonstrates, that establishing clear performance goals and objectives is a necessary pre-condition to effectively assessing risk. Having, for example, more complete timeliness goals, and goals and measures reflecting other areas of performance, would allow VA to better identify and target risks associated with managing two processes in parallel, including the potential that veterans with appeals in the legacy process may experience significant delays relative to those in the new appeals process. Importantly, VA is missing an opportunity to fully benefit from RAMP by not testing and assessing other aspects of the new appeals process. The Act authorizes VA to test the feasibility and advisability of any facet of the new appeals process, and VA is taking a positive step to mitigate some risks by testing the two review options available within VBA (review of a claim by a higher-level official based on the same evidence and review of a supplemental claim with additional evidence) through RAMP. In November 2017, VA began RAMP by inviting 500 veterans whose appeals have been pending the longest to participate. According to VA officials, each month VA plans to continue offering RAMP to additional eligible veterans with pending legacy appeals until January 2019—a month before VA anticipates fully implementing the new appeals system. However, as designed, RAMP does not include features that—consistent with a well-developed and documented pilot test program—would provide VA with an opportunity to evaluate fully the soundness of new processes and practices on a smaller scale. Specifically: VA’s plan does not clearly define success criteria for RAMP: VA’s appeals plan states that the agency will collect certain data from RAMP, such as the rate at which eligible veterans opt into the process, timeliness of claims processing, and individual employee productivity. VA also established an overall average processing time goal of 125 days for the two VBA options; however, the plan and supporting documentation do not clearly articulate whether RAMP reviews are expected to meet this timeliness goal. The plan also did not identify other success criteria for RAMP or the types of results expected before fully implementing the new appeals process. For example, VA’s plan does not articulate the expected number and type of subsequent appeals to the Board that result from RAMP. In commenting on this assessment, VA noted that its intent in implementing RAMP was to collect data and test aspects of the new process, and that RAMP was not an initiative in and of itself. However, developing performance measures and data gathering procedures and defining success criteria for a pilot test before proceeding to full implementation are sound practices for process redesign and pilot testing. In addition, because RAMP was not included in VA’s risk assessment, we asked VA if it had identified any risks or mitigation strategies specific to RAMP. In its supplemental materials, VA stated that the greatest risk to RAMP is a low participation rate among eligible veterans with legacy claims. 
VA also indicated that it would need 10 percent of eligible veterans to opt into RAMP to yield meaningful results. However, this threshold is not articulated in VA's appeals plan as an explicit success criterion or objective. According to data provided by VA, as of January 22, 2018, 238 veterans had opted in. Of veterans with pending claims in RAMP, two-thirds chose the higher-level review option. VA also reported that 47 RAMP decisions had been made so far; as of yet, no appeals of RAMP decisions have been filed.

VA's plan does not articulate how it will assess RAMP before proceeding with full implementation: Although VA's appeals plan describes a "close-out" phase in which VA intends to assess the results of RAMP, it does not detail the conditions that would have to be met (or not met) to trigger changes. For example, VA's plan does not explain when or how it might respond to low opt-in rates for RAMP—other than stating it will increase outreach to eligible veterans—or to unexpectedly high appeal rates to the Board resulting from RAMP decisions. Sound redesign and change management practices both suggest that pilot tests be rigorously monitored and evaluated, and that further rollout occur only after an agency's transition team takes any needed corrective action and determines that the new process is achieving previously identified success criteria. Without fully articulating its plan for deciding how and when to roll out changes more broadly, it is not clear whether VA would be prepared to fully implement a new appeals process that achieves its aim of better serving veterans.

RAMP does not test all aspects of the new appeals process: RAMP provides an opportunity to learn about experiences at VBA under the new system, such as the rate at which eligible veterans choose those options and the resources that will be required to process their appeals. However, RAMP was not designed to test how many veterans would choose to appeal directly to the Board and, therefore, it will not provide comparable information on the Board appeals options. Sound workforce planning practices suggest that agencies identify the total resources needed to manage the risk of implementing new processes and conduct scenario planning to determine those needs. In addition, although we previously recommended that VA conduct additional sensitivity analyses to inform projections of future appeals inventories, VA's appeals plan does not reflect VA's use or intended use of sensitivity analyses when projecting staffing needs for new appeals options at the Board. In commenting on our assessment, VA officials said they do not plan to conduct additional sensitivity analyses to project future workloads until they have more information from RAMP to inform their assumptions. As a result, VA will lack data on scenarios in which veterans may overwhelmingly choose options available at the Board over those at VBA when the appeals plan is fully implemented. This presents a risk that VA's early production projections and initial resource allocations may not be properly balanced between the Board and VBA. This, in turn, may result in an unexpectedly large number of appeals pending with the Board, and correspondingly lengthy average wait and decision times for some, if not all, Board options.
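To illustrate the kind of sensitivity analysis we have recommended, the following minimal Python sketch is ours, not VA's methodology: it varies a single assumption, the share of appellants who choose a Board option rather than a VBA option, and projects the resulting Board inventory. Every figure (annual appeals, Board decision capacity, starting inventory) is invented for illustration only.

    # Illustrative sensitivity analysis of Board workload; all inputs are hypothetical.
    annual_appeals = 100_000     # assumed new appeals filed per year
    board_capacity = 55_000      # assumed Board decisions issued per year
    starting_inventory = 20_000  # assumed appeals already pending at the Board

    for board_share in (0.3, 0.5, 0.7):  # share of appellants choosing a Board option
        inventory = starting_inventory
        for year in range(3):            # project three years forward
            receipts = annual_appeals * board_share
            decided = min(board_capacity, inventory + receipts)
            inventory = inventory + receipts - decided
        print(f"Board share {board_share:.0%}: inventory after 3 years = {inventory:,.0f}")

Even under these invented numbers, a shift in veterans' choices from 50 percent to 70 percent turns a shrinking Board inventory into a rapidly growing one, which is precisely the scenario risk described above.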
Having information on the number of veterans who are likely to appeal to the Board is particularly critical, given that similar efforts to create additional review options at VBA did not achieve their goal of reducing the percentage of appeals that continue on to the Board. In 2001, VA established the Decision Review Officer (DRO) process—in which senior staff have the authority to overturn an initial disability claim decision without any new evidence—to resolve more appeals at the regional level and avoid long waits at the Board. However, we reported in 2011 that, although the DRO process helped some veterans get additional benefits at the regional office level, it did not accomplish the program's primary goal of reducing the percentage of appeals continuing on to the Board. In responding to our assessment, VA officials reiterated their plans to increase outreach in the event of low opt-in rates for RAMP and indicated they recently began to send follow-up RAMP invitation letters. With respect to assessing all appeal options, VA officials stated that, while no legal bar prevents testing of the Board options, the Board is focused on reducing its inventory of pending appeals while RAMP provides early resolution of appeals within the new VBA-only options. Officials conceded that this approach means they cannot collect data on the rate at which veterans opt to appeal directly to the Board (that is, bypassing additional VBA review) until the new process is fully implemented. However, they noted that they can collect some data on the rate at which veterans whose appeals go through RAMP file subsequent appeals to the Board, even though the Board will not begin processing those appeals until full implementation.

By pursuing an approach that does not identify or mitigate significant risks associated with implementing a new process, VA is taking a chance that untested aspects will not perform as desired. The Act provides VA the authority to pilot aspects of the process and flexibility on the timing of implementing the new process, which could allow some additional time for VA to carefully measure performance under RAMP and determine whether any corrective actions are necessary. If VA does not take full advantage of this authority, it risks moving forward without knowing whether the new appeals process improves experiences for veterans, and potentially implementing a process that is more expensive or results in longer wait times than originally anticipated.

In conclusion, in implementing appeals reform after the enactment of the Veterans Appeals Improvement and Modernization Act of 2017, VA is undertaking a complex endeavor that has the potential to affect the lives of hundreds of thousands of veterans with service-connected disabilities. Such an endeavor demands a commensurate level of planning to be successful. While the Act required VA to submit its plan within 90 days of enactment, VA had proposed and begun to plan for appeals reform much earlier, and had our March 2017 recommendations to guide its planning efforts from a foundation of sound practices. VA's November 2017 appeals plan is a positive step forward. Certain elements of the plan—such as establishing an agency-wide governance structure to oversee implementation and testing aspects of reform prior to full implementation—are notable gains since our March 2017 report.
At the same time, the plan partially addresses or does not address five of the required elements called for by the Act, such as delineating the total resources required by VBA and the Board to implement and administer the new appeals process and address legacy appeals. The plan also is not fully responsive to our past recommendations and does not reflect a number of sound planning practices that are essential for gauging progress, establishing accountability, and linking resources to results. One such key practice is articulating a desired "end state"—a vision for what successful implementation would look like for the new appeals process as well as the wind-down of the legacy process, such as accurate and timely processing of appeals while ensuring veteran satisfaction. Without establishing a complete and balanced set of goals and related performance measures to achieve this end state, and without monitoring and assessing progress along the way, VA risks falling short of its overarching objective—to improve the timeliness of appeals decisions for veterans overall. By not fully articulating how it plans to monitor workloads and devote resources to both the new and legacy processes, VA runs the risk of disadvantaging veterans with legacy appeals relative to those in the new process, or vice versa.

Just as important is establishing a robust integrated master schedule—rather than a high-level timeline—that is built upon and clearly reflects extensive detailed planning and includes all of the activities necessary to execute the program and the interdependencies between these activities. Without such a road map, VA's appeals plan does not provide reasonable assurance that decision makers have the essential information needed to manage this complex and important program. We are encouraged that VA has taken some steps toward assessing risks, including establishing a risk register and implementing RAMP to collect information on the two VBA appeals options; however, unless VA assesses risks against a balanced set of goals and measures, VA may not be fully aware of risks that may impede successful implementation of appeals reform. Further, although VA will undoubtedly learn from the RAMP experience, it may not learn all that it should from its efforts without (1) establishing clear criteria for what success looks like (or the circumstances that would cause VA to consider making course corrections) and (2) building in time to take stock of lessons learned before moving to full implementation. VA's plan places considerable weight on RAMP to, among other efforts, mitigate risk and generate estimates of the resources needed for successful implementation after fiscal year 2018, even though RAMP does not fully test the options for appealing to the Board that will be available to veterans after full implementation. Unless VA addresses key risks associated with fully implementing appeals reform—by either testing or conducting sensitivity analyses for all five appeals options, to better understand potential workloads at the Board—VA runs the risk of fully implementing the process without knowing whether it is improving the process for veterans. In our forthcoming report, we anticipate making recommendations to address these issues.
Specifically, we are preliminarily considering recommending that the Secretary of Veterans Affairs:

1. address all of the required elements in the Act in VA's appeals plan to Congress—including delineating resources required for all VBA and Board appeals options—using sensitivity analyses and RAMP results, where appropriate and needed;

2. clearly articulate in VA's appeals plan how VA will monitor and assess the new appeals process compared to the legacy process, including specifying a balanced set of goals and measures—such as timeliness goals for all VBA appeals options and Board dockets, and measures of accuracy, veteran satisfaction, and cost—and related baseline data;

3. augment the master schedule for VA's appeals plan to reflect all activities—such as RAMP and modifications to IT systems—as well as assigned responsibilities, interdependencies, start and end dates for key activities for each workgroup, and resources, to establish accountability and reduce the overall risk of implementation failures; and

4. ensure that the appeals plan more fully addresses risks associated with appeals reform—for example, by assessing risks against a balanced set of goals and measures, articulating success criteria and an assessment plan for RAMP, and testing or conducting sensitivity analyses of all appeal options—prior to fully implementing the new appeals process.

Chairman Roe, Ranking Member Walz, and Members of the Committee, this concludes my prepared statement. I would be pleased to respond to any questions that you may have at this time.

GAO Contact and Staff Acknowledgments

For further information about this testimony, please contact Elizabeth Curda at (202) 512-7215 or curdae@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Other key contributors to this testimony include Michele Grgich (Assistant Director), James Whitcomb (Analyst in Charge), and Rachael Chamberlin. In addition, key support was provided by Susan Aschoff, Mark Bird, David Chrisinger, Daniel Concepcion, Clifton Douglas, Alex Galuten, Nisha Hazra, Melissa Jaynes, Benjamin Licht, Patricia McClure, Sheila McCoy, Lorin Obler, Gloria Proa, Almeta Spencer, James Sweetman, Walter Vance, and Greg Whitney.

Appendix I: Our Assessment of VA's Appeals Plan Against Required Elements in the Act

To assess the extent to which VA's appeals plan addresses the required elements in the Veterans Appeals Improvement and Modernization Act of 2017 (the Act), we first identified and developed a checklist reflecting each required element for VA's appeals plan (including subparts) under sections 3(a) and (b) of the Act. To compare the required elements and their subparts against VA's appeals plan and the supplemental materials provided, we developed decision rules for determining whether VA's appeals plan addressed, partially addressed, or did not address each required element. Specifically, we concluded that VA's plan addressed (or partially addressed) a required element if the plan included information related to all (or some) subparts of the requirement. We focused on the plan as presented, rather than auditing the information VA relied on in developing the plan. For example, section 3(b)(10) of the Act required VA's plan to include a description of the modifications to the IT systems that VBA and the Board require to carry out the new appeals system, including cost estimates and a timeline for making the IT modifications.
We concluded that VA’s plan addressed all sub-parts of this element because it provided a description of required IT modifications, a reference to costs included in the Appeals Modernization IT budget, and a timeline. However, our determination that VA addressed this element should not be construed to necessarily mean that VA fully identified or described all IT requirements, or provided complete estimated costs and timelines associated with those requirements, or that the information in VA’s appeals plan comported with sound planning practices. This type of assessment was outside the scope of this objective. Table 2 summarizes our assessment of VA’s appeals plan against the 22 required elements in the Act.
Why GAO Did This Study

VA's disability compensation program pays cash benefits to veterans with disabilities connected to their military service. In recent years, the number of appeals of VA's benefit decisions has been rising. For decisions made on appeal in fiscal year 2017, veterans waited an average of 3 years for resolution by either VBA or the Board, and 7 years for resolution by the Board. The Veterans Appeals Improvement and Modernization Act of 2017 makes changes to VA's current (legacy) appeals process, giving veterans new options to have their claims further reviewed by VBA or to appeal directly to the Board. The Act requires VA to submit to Congress and GAO a plan for implementing a new appeals process, and it includes a provision for GAO to assess VA's plan. This testimony focuses on the extent to which VA's plan (1) addresses the required elements in the Act, and (2) reflects sound planning practices identified in prior GAO work. GAO's work entailed reviewing and assessing VA's appeals plan and related documents against sound planning practices, and soliciting VA's views on GAO's assessments.

What GAO Found

The Department of Veterans Affairs' (VA) plan for implementing a new disability appeals process while attending to appeals in the current process addresses most, but not all, elements required by the Veterans Appeals Improvement and Modernization Act of 2017 (Act). VA's appeals plan addresses 17 of 22 required elements, partially addresses 4, and does not address 1. For example, not addressed is the required element to include the resources needed by the Veterans Benefits Administration (VBA) and the Board of Veterans' Appeals (Board) to implement the new appeals process and address legacy appeals under the current process. VA needs this information to certify, as specified under the Act, that it has sufficient resources to implement appeals reform and make timely appeals decisions under the new and legacy processes.

VA's appeals plan reflects certain sound planning practices, but it could benefit from including important details in several key planning areas:

Performance measurement: VA's plan reflects steps taken to track performance, but it could articulate a more complete and balanced set of goals and measures for monitoring and assessing performance on a range of dimensions of success. Specifically, the plan reports that VA is developing a process to track the timeliness of the new and legacy processes. However, contrary to sound planning practices, the plan does not include timeliness goals for all five appeals options available to veterans, does not include goals or measures for additional aspects of performance (such as accuracy or cost), and does not explain how VA will monitor or assess the new process compared to the legacy process. Unless VA clearly articulates a complete and balanced set of goals and measures, it could inadvertently incentivize staff to focus on certain aspects of appeals performance over others or fail to improve overall service to veterans.

Project management: VA's plan includes a master schedule for implementing the new appeals process; however, this schedule falls short of sound practices because it does not include key planned activities—such as its pilot test of two of the five appeals options. In addition, the schedule does not reflect other sound practices for guiding implementation and establishing accountability—such as articulating interim goals and needed resources for, and interdependencies among, activities.
Unless VA augments its master schedule to include all key activities and reflect sound practices, VA may be unable to provide reasonable assurance that it has the essential program management information needed for this complex and important effort.

Risk assessment: VA has taken steps to assess and mitigate some risks related to appeals reform by, for example, pilot testing two of the five appeals options through its Rapid Appeals Modernization Program (RAMP). However, as designed, RAMP does not include key features of a well-developed and documented pilot test. For example, VA has not articulated how it will assess RAMP before proceeding with full implementation. In addition, RAMP is not pilot testing the three Board options and, as a result, VA will not have data on the extent to which veterans will appeal directly to the Board when given the option. Unless VA identifies and mitigates key risks associated with implementing a new process, VA is taking a chance that untested aspects will not perform as desired.

What GAO Recommends

In its forthcoming report, GAO is considering recommending that VA: fully address all legally required elements in its appeals plan, articulate how it will monitor and assess the new appeals process as compared to the legacy process, augment its master schedule for implementation, and more fully address risk.